Cambridge researchers Dr Claudia Bonfio, Dr Akshay Deshmukh and Dr Elizabeth Radford have been awarded UKRI Future Leaders Fellowships, which provide up to seven years of support to enable them to tackle ambitious programmes, multidisciplinary questions, and new or emerging research and innovation areas and partnerships.
Dr Claudia Bonfio’s lab in the Department of Biochemistry studies how life emerges from non-living matter, a question her team tackles by designing and building active primitive cells. Her Future Leaders Fellowship project addresses this evolutionary question through an approach that bridges chemistry and biophysics, investigating how the synergy between primitive lipids and peptides led to the emergence of membrane proteins – a hallmark of living cells.
Dr Akshay Deshmukh is returning to Cambridge’s Department of Chemical Engineering and Biotechnology from MIT to take up his Future Leaders Fellowship. To reach net zero by 2050, we will require seven times more critical metals than we produce today. Current extraction methods use large amounts of energy, water, chemicals, and land. During his Fellowship, Deshmukh will develop new processes to recover metals from sources like brines and recycling streams. His research combines experiments, spectroscopy, and mechanistic studies to create a framework for designing next-generation membranes, and aims to speed up the development of cheaper, more sustainable separation technologies.
Dr Elizabeth Radford is a paediatric neurologist based in the Department of Paediatrics, whose research is working to accelerate diagnosis and expand the treatment options for children affected by neurodevelopmental genetic conditions. Everyone carries small genetic changes, and while most are harmless, some disrupt how the proteins in our cells work and can affect a child’s development. However, it isn’t always clear which changes cause problems, making diagnosis slow and uncertain. During her Fellowship, Radford will study thousands of genetic changes by recreating them in human cells grown in the lab. This will show which changes damage proteins, helping doctors interpret genetic tests and provide earlier diagnoses, and paving the way for future treatments.
UK Research and Innovation’s (UKRI) Future Leaders Fellowships scheme allows universities and businesses to develop talented early-career researchers and innovators and to attract new people to their organisations, including from overseas.
Out of the successful applications, thirteen projects are led by businesses and funded by Innovate UK.
To support excellent research and innovation wherever it arises and to facilitate movement of people and projects between sectors, FLF fellows are based in the most appropriate environment for their projects, be that universities, businesses, charities, or other independent research organisations.
The Fellowship allows the individual to devote their time to tackle challenging research and innovation problems and to develop their careers as they become the next wave of world-class research and innovation leaders.
The Fellowship also allows recipients access to the FLF Development Network, which provides specialised leadership training, access to networks, workshops, mentors, one-to-one coaching, and opportunities for additional seed-funding for collaborative projects.
“UKRI’s Future Leaders Fellowships provide researchers and innovators with long-term support and training to embark on large and complex research programmes, to address key national and global challenges,” said Frances Burstow, Director of Talent and Skills at UKRI. “The programme supports the research and innovation leaders of the future to transcend disciplinary and sector boundaries, bridging the gap between academia and business. UKRI supports excellence across the entire breadth of its remit, supporting early-career researchers to lessen the distance from discovery to real world impact.”
“UKRI’s Future Leaders Fellowships offer long-term support to outstanding researchers, helping them turn bold ideas into innovations that improve lives and livelihoods in the UK and beyond,” said UKRI Chief Executive, Professor Sir Ian Chapman. “These fellowships continue to drive excellence and accelerate the journey from discovery to public benefit. I wish them every success.”
Three Cambridge researchers are among 77 early-career researchers who have been awarded a total of £120 million to lead vital research, collaborate with innovators and develop their careers as the research and innovation leaders of the future.
From fully believing that he had bungled one of his speeches to emerging champion at his first international public speaking competition, NUS Law final-year student Kamal Ashraf Bin Kamil Jumat’s experience at the World Universities Public Speaking Invitational Championship 2025 was quite the rollercoaster ride.
The University of Macau invited 11 prestigious universities, including NUS, Tsinghua University, Korea University, the University of Oxford and the University of British Columbia, to send representatives to the inaugural edition of the championship that it hosted on 30 August 2025. Participants were required to prepare and deliver a speech based on the theme “Creating a Diverse Future Together”, answer questions from a panel of judges, and deliver an impromptu speech based on a randomly selected topic.
When NUS was invited to field a student speaker for the competition, Ms Sim Ee Waun, a tutor at the Department of Communications and New Media (CNM), immediately recognised her student Kamal’s potential to do well on this stage and put his name into the pool of nominees. After he was selected, she mentored him in the lead-up to the competition.
Despite battling nervousness as a first-time competitor in a field of high-calibre speakers, Kamal completed the first two segments with relative confidence, trusting in his pre-competition training and rehearsals. His prepared speech, entitled “Bringing Tomorrow”, discussed how diversity grows through active, consistent efforts and used stories such as the origins of Singlish and his own experiences as a firefighter in National Service to encourage listeners to build trust through communication.
For the third segment, he hoped to draw topics relating to Education or Culture from the three possible categories of impromptu speech topics, as they could relate more closely to his experiences in youth and community volunteering.
Instead, he drew “How do friendships shape our personal growth?” from the Friendship category.
With just 10 seconds of preparation time, he recalled Ms Sim’s advice to “give a speech only you can give” and rapidly composed a speech using the philosophical metaphor of tabula rasa to frame anecdotes of how his multicultural friendships have shaped him.
Kamal felt extremely nervous during the impromptu speech and left the stage disappointed with his performance. He immediately texted Ms Sim: “I don’t think I made it.”
Ms Sim was watching the livestreamed competition and disagreed with his assessment. She described his impromptu speech as “brilliant,” noting: “Not only was his speech eloquent and personal, it carried good points too. Believe me, it is not easy to do this in 10 seconds.”
She added: “His win reminds us that on the world stage, we should expect to win. Why not? We may be a little red dot, but we do punch way above our weight, and Kamal is a shining example of that.”
Although Kamal was unhappy with the speech at the time, it ended up being a meaningful and precious element of his win.
“I managed to talk about how my friends from other races have always been allies to me, helping me to feel comfortable in preserving my traditions and my daily practices. These anecdotes didn’t fit into my prepared speech, so I’m glad I got to present these messages that are important to me, at a time when Singapore is celebrating SG60, diversity and multiculturalism,” he said.
Speaking from the heart
As a debater since primary school and now a law student, Kamal is no stranger to speaking before judges and an audience. He currently serves as a part-time debate coach for a local secondary school and chairs a national debating academy organised by the Malay Youth Literary Association. His law training also includes polishing his trial advocacy and interpersonal communication skills through pro bono projects at the Syariah Court.
In his quest to learn more about the art of public speaking, he took up Public Speaking and Critical Reasoning offered by CNM as an elective course in Semester 2 of AY2024/2025. “I wanted to learn the technical skills behind speaking in front of people and connecting with them — something a bit more based on personality and more heartfelt, rather than doing it as part of a job,” said Kamal, noting that he learnt how to tailor material to the audience and craft speeches in different styles.
Nominating him for the competition was an easy decision, said Ms Sim. “In my five years of teaching public speaking at NUS, Kamal stands out as an incredibly talented public speaker who speaks with conviction and polish, and he is sharp as a pin. So when there was a call to nominate students for this competition, it was only natural that I put his name forward.”
Kamal spent the second half of the semester and part of his summer writing and rewriting his draft with Ms Sim’s help via WhatsApp and Zoom calls, while juggling an internship that left him with little time to rehearse until he arrived in Macau two days before the competition.
He started his days in Macau at 6am to rehearse intensively in his hotel room before heading out for activities organised by the University of Macau, such as soundchecks and workshops with his fellow competitors. Although most of the contestants were friendly, Kamal grew nervous as he learnt about their competitive speaking experience and training.
“These were some of the best public speakers I’ve ever met,” he said. “They were fantastic and they came from very established institutions that are well known for their craft in the English language, so the pressure was there.”
His nervousness was offset by the knowledge that he had supportive family and friends who would be happy for him regardless of the results. His mother frequently reminded him to just enjoy himself, and his friends refrained from pressuring him to come back with a win.
“They knew there was no point in adding pressure onto me, because they knew I was going to do it myself anyway,” said Kamal in jest.
He came away from the experience with a deep respect for the other contestants and a drive to continue improving his craft. When he was asked to give an acceptance speech with some of his public speaking tips at the competition’s closing dinner, Kamal expressed his admiration for his fellow competitors by asking them to share their own tips with him.
“Some of them had styles which I couldn’t pull off. Sofia (Lopez Castillo, first runner-up) from the University of British Columbia — her style was very emotional, very heartfelt. I’m a bit too clean or refined sometimes, and it’s hard for me to dig deep and hit those emotive notes,” he said. “For every one word of advice I was asked to give, I knew I had ten times as much to learn in return.”
Another outstanding speaker he recalled was second runner-up Benjamin Thomas from Stanford University, who was impressively calm in his delivery, especially when answering the impromptu questions.
Kamal concluded: “There are elements that I want to take from all 10 other speakers to become a better speaker, because this is just at the university level and I can see all these different skills that I need to pick up. The biggest takeaway for me is that there’s a lot more that I can learn.”
The full recording of the World Universities Public Speaking Invitational Championship 2025 is available on YouTube.
The ground-shaking that an earthquake generates is only a fraction of the total energy that a quake releases. A quake can also generate a flash of heat, along with a domino-like fracturing of underground rocks. But exactly how much energy goes into each of these three processes is exceedingly difficult, if not impossible, to measure in the field.
Now MIT geologists have traced the energy that is released by “lab quakes” — miniature analogs of natural earthquakes that are carefully triggered in a controlled laboratory setting. For the first time, they have quantified the complete energy budget of such quakes, in terms of the fraction of energy that goes into heat, shaking, and fracturing.
They found that only about 10 percent of a lab quake’s energy causes physical shaking. An even smaller fraction — less than 1 percent — goes into breaking up rock and creating new surfaces. The overwhelming portion of a quake’s energy — on average 80 percent — goes into heating up the immediate region around a quake’s epicenter. In fact, the researchers observed that a lab quake can produce a temperature spike hot enough to melt surrounding material and turn it briefly into liquid melt.
The geologists also found that a quake’s energy budget depends on a region’s deformation history — the degree to which rocks have been shifted and disturbed by previous tectonic motions. The fractions of quake energy that produce heat, shaking, and rock fracturing can shift depending on what the region has experienced in the past.
“The deformation history — essentially what the rock remembers — really influences how destructive an earthquake could be,” says Daniel Ortega-Arroyo, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “That history affects a lot of the material properties in the rock, and it dictates to some degree how it is going to slip.”
The team’s lab quakes are a simplified analog of what occurs during a natural earthquake. Down the road, their results could help seismologists predict the likelihood of earthquakes in regions that are prone to seismic events. For instance, if scientists have an idea of how much shaking a quake generated in the past, they might be able to estimate the degree to which the quake’s energy also affected rocks deep underground by melting or breaking them apart. This in turn could reveal how much more or less vulnerable the region is to future quakes.
“We could never reproduce the complexity of the Earth, so we have to isolate the physics of what is happening, in these lab quakes,” says Matěj Peč, associate professor of geophysics at MIT. “We hope to understand these processes and try to extrapolate them to nature.”
Peč (pronounced “Peck”) and Ortega-Arroyo reported their results on Aug. 28 in the journal AGU Advances. Their MIT co-authors are Hoagy O’Ghaffari and Camilla Cattania, along with Zheng Gong and Roger Fu at Harvard University and Markus Ohl and Oliver Plümper at Utrecht University in the Netherlands.
Under the surface
Earthquakes are driven by energy that is stored up in rocks over millions of years. As tectonic plates slowly grind against each other, stress accumulates through the crust. When rocks are pushed past their material strength, they can suddenly slip along a narrow zone, creating a geologic fault. As rocks slip on either side of the fault, they produce seismic waves that ripple outward and upward.
We perceive an earthquake’s energy mainly in the form of ground shaking, which can be measured using seismometers and other ground-based instruments. But the other two major forms of a quake’s energy — heat and underground fracturing — are largely inaccessible with current technologies.
“Unlike the weather, where we can see daily patterns and measure a number of pertinent variables, it’s very hard to do that very deep in the Earth,” Ortega-Arroyo says. “We don’t know what’s happening to the rocks themselves, and the timescales over which earthquakes repeat within a fault zone are centuries to millennia, making any sort of actionable forecast challenging.”
To get an idea of how an earthquake’s energy is partitioned, and how that energy budget might affect a region’s seismic risk, he and Peč went into the lab. Over the last seven years, Peč’s group at MIT has developed methods and instrumentation to simulate seismic events, at the microscale, in an effort to understand how earthquakes at the macroscale may play out.
“We are focusing on what’s happening on a really small scale, where we can control many aspects of failure and try to understand it before we can do any scaling to nature,” Ortega-Arroyo says.
Microshakes
For their new study, the team generated miniature lab quakes that simulate a seismic slipping of rocks along a fault zone. They worked with small samples of granite, which are representative of rocks in the seismogenic layer — the geologic region in the continental crust where earthquakes typically originate. They ground up the granite into a fine powder and mixed the crushed granite with a much finer powder of magnetic particles, which they used as a sort of internal temperature gauge. (A particle’s magnetic field strength will change in response to a fluctuation in temperature.)
The researchers placed samples of the powdered granite — each about 10 square millimeters in area and 1 millimeter thick — between two small pistons and wrapped the ensemble in a gold jacket. They then applied a strong magnetic field to orient the powder’s magnetic particles in the same initial direction and to the same field strength. They reasoned that any change in the particles’ orientation and field strength afterward should be a sign of how much heat that region experienced as a result of any seismic event.
Once samples were prepared, the team placed them one at a time into a custom-built apparatus that the researchers tuned to apply steadily increasing pressure, similar to the pressures that rocks experience in the Earth’s seismogenic layer, about 10 to 20 kilometers below the surface. They used custom-made piezoelectric sensors, developed by co-author O’Ghaffari, which they attached to either end of a sample to measure any shaking that occurred as they increased the stress on the sample.
They observed that at certain stresses, some samples slipped, producing a microscale seismic event similar to an earthquake. By analyzing the magnetic particles in the samples after the fact, they obtained an estimate of how much each sample was temporarily heated — a method developed in collaboration with Roger Fu’s lab at Harvard University. They also estimated the amount of shaking each sample experienced, using measurements from the piezoelectric sensor and numerical models. The researchers also examined each sample under the microscope, at different magnifications, to assess how the size of the granite grains changed — whether and how many grains broke into smaller pieces, for instance.
From all these measurements, the team was able to estimate each lab quake’s energy budget. On average, they found that about 80 percent of a quake’s energy goes into heat, while 10 percent generates shaking, and less than 1 percent goes into rock fracturing, or creating new, smaller particle surfaces.
“In some instances we saw that, close to the fault, the sample went from room temperature to 1,200 degrees Celsius in a matter of microseconds, and then immediately cooled down once the motion stopped,” Ortega-Arroyo says. “And in one sample, we saw the fault move by about 100 microns, which implies slip velocities essentially about 10 meters per second. It moves very fast, though it doesn’t last very long.”
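The figures quoted above lend themselves to a quick back-of-the-envelope check. The sketch below is illustrative only, not the researchers’ analysis code: the 10-microsecond slip duration is an assumption consistent with “a matter of microseconds,” not a value reported in the study, and 0.01 stands in for the “less than 1 percent” fracturing fraction.

```python
# Back-of-the-envelope check of the lab-quake figures quoted above.
# Assumption: a ~10-microsecond slip duration (the article says only
# "a matter of microseconds"); 0.01 is an upper bound on fracturing.

def average_slip_velocity(displacement_m: float, duration_s: float) -> float:
    """Average slip velocity over the event, in meters per second."""
    return displacement_m / duration_s

# ~100 microns of fault slip over ~10 microseconds implies ~10 m/s.
v = average_slip_velocity(100e-6, 10e-6)
print(f"average slip velocity: {v:.0f} m/s")  # prints 10 m/s

# Average energy budget reported for the lab quakes: heat dominates.
energy_budget = {"heat": 0.80, "shaking": 0.10, "fracturing": 0.01}
dominant = max(energy_budget, key=energy_budget.get)
print(f"dominant channel: {dominant} ({energy_budget[dominant]:.0%})")
```

The three fractions sum to roughly 91 percent; the remainder is spread across smaller dissipation channels not broken out in the reported averages.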
The researchers suspect that similar processes play out in actual, kilometer-scale quakes.
“Our experiments offer an integrated approach that provides one of the most complete views of the physics of earthquake-like ruptures in rocks to date,” Peč says. “This will provide clues on how to improve our current earthquake models and natural hazard mitigation.”
This research was supported, in part, by the National Science Foundation.
A scanning electron photomicrograph highlights a region of rock that slipped during a laboratory-induced earthquake. The “flowy” central area represents a portion of the rock that was melted and turned into glass due to intense frictional heating.
Economy is doing OK. So why are Americans so pessimistic about their prospects?
Scholars say government statistics can miss lived experience, politics taking larger role in shaping perceptions
Christina Pazzanese
Harvard Staff Writer
There is some uncertainty, mostly driven by the new global tariffs, but overall the U.S. economy is doing reasonably well, economists say.
Still, Americans seem to be feeling disproportionately pessimistic about their economic prospects for reasons that aren’t totally clear to scholars — and may not be directly connected to the economy itself.
The Federal Reserve is widely expected to cut interest rates this week. Core inflation remains around 3 percent — 3.1 percent for August, according to federal data released Thursday. Unemployment hit 4.3 percent in August, above a record low of 3.4 percent in April 2023 but far below the pandemic high of 14.7 percent in 2020.
Those numbers aren’t great but still look “pretty good,” said Karen Dynan, former chief economist at the U.S. Treasury and a professor of the practice of economics and public policy at the Faculty of Arts & Sciences and Harvard Kennedy School.
But that sentiment appears out of sync with how most Americans are feeling: Only 25 percent believe they have a good chance to improve their standard of living, the lowest share since the stock market crashed in 1987, according to a recent Wall Street Journal/NORC poll.
More than 75 percent say they are not confident the next generation will have a higher standard of living than they do, the poll also showed.
“A lot of the pessimism doesn’t seem consistent” with the data, Dynan said.
So why the disconnect?
“It’s not that Americans or the data are wrong — consumers do have legitimate concerns. It’s that some of the financial pressures people are feeling, like increased financing costs for auto loans or closing costs on home mortgages, don’t necessarily show up in the major datasets like the Consumer Price Index,” said economist Stefanie Stantcheva, whose Social Economics Lab at Harvard studies how people understand economic issues and policies.
Economists Karen Dynan (from left), Kenneth Rogoff, and Stefanie Stantcheva.
Harvard file photos
Government statistics tend to take a very broad view, so geographic and demographic disparities, or variations across industries and sectors, often get overlooked, Dynan said.
“A lot of the data we have speak to conditions in the economy overall. The unemployment rate that we look at is for the nation as a whole; the GDP number is about how the entire pie is growing,” she said. “The data that we have on how individuals are doing is more limited and less timely.”
For instance, lower-income people often face higher inflation than the wealthy, something known as inflation inequality. That’s not captured by the usual economic measures, leaving economists with an incomplete picture of people’s “lived experiences” with the economy, Stantcheva said.
“A lot of people’s feeling of satisfaction and well-being is relative to what they’re used to, what they see their neighbors enjoying, and increasingly, what they see on the internet,” said Kenneth Rogoff, professor of economics and Maurits C. Boas Chair of International Economics at Harvard.
The pressure that rising financing costs for things like credit cards and car loans put on consumers, and anxiety over the high price of housing across the U.S., aren’t captured in economic reports. Still, they are leaving many people pretty grim about their economic future, especially young adults, who also face an increasingly tough job market, said Rogoff.
Of late, Stantcheva notes, there’s been a rise in “zero-sum thinking” about the economy.
“This idea that if you do well, or a group of people does well, it means someone else must be doing worse, someone else must be losing. We see that much more pronounced among younger generations, not just in the U.S., also in other rich countries,” she said.
Politics also now plays an outsized role in shaping what the public knows about the economy and how they perceive it, whether negative or positive. People tend to give more weight to how their trusted political leaders or favorite news outlets characterize the economy than what government statistics seem to show, said Dynan.
Consumers are also experiencing a confluence of daily changes: continuing high prices; remote work and other perks that employers offered only a couple of years ago but have since clawed back; and staffing and funding cuts across the federal government earlier this year, which are starting to be felt locally as institutions and services people see and use, like schools, healthcare, and transportation, close or face cutbacks.
“I think some of these social changes may be weighing on people and then end up expressed as views about the economy,” Dynan said.
Traditionally, developing new materials for cutting-edge applications — such as SpaceX’s Raptor engine — has taken a decade or more. But thanks to a breakthrough technology pioneered by an MIT research group now celebrating its 40th year, a key material for the Raptor was delivered in just a few years. The same innovation has accelerated the development of high-performance materials for the Apple Watch, U.S. Air Force jets, and Formula One race cars.
The MIT Steel Research Group (SRG) also led to a national initiative that “has already sparked a paradigm shift in how new materials are discovered, developed, and deployed,” according to a White House story describing the Materials Genome Initiative’s first five years.
Gregory B. Olson founded the SRG in 1985 with the goal of using computers to accelerate the hunt for new materials by plumbing databases of those materials’ fundamental properties. It was the beginning of a new field: computational materials design.
At the time, “nobody knew whether we could really do this,” remembers Olson, a professor of the practice in the Department of Materials Science and Engineering. “I have some documented evidence of agencies resisting the entire concept because, in their opinion, a material could never be designed.”
Eventually, however, Olson and colleagues showed that the approach worked. One of the most important results: In 2011 President Barack Obama made a speech “essentially announcing that this technology is real and it’s what everybody should be doing,” says Olson, who is also affiliated with the Materials Research Laboratory. In the speech, Obama launched the Materials Genome Initiative (MGI).
The MGI is developing “a fundamental database of the parameters that direct the assembly of the structures of materials,” much like the Human Genome Project “is a database that directs the assembly of the structures of life,” says Olson.
The goal is to use the MGI database to discover, manufacture, and deploy advanced materials twice as fast, and at a fraction of the cost, compared to traditional methods, according to the MGI website.
At MIT, the SRG continues to focus on steel, “because it’s the material [the world has] studied the longest, so we have the deepest fundamental understanding of its properties,” says Olson, project principal investigator.
The Cybersteels Project, funded by the Office of Naval Research, brings together eight MIT faculty who are working to expand our knowledge of steel, eventually adding their data to the MGI. Major areas of study include the boundaries between the microscopic grains that make up a steel and the economic modeling of new steels.
Concludes Olson: “It has been tremendously satisfying to see how this technology has really blossomed in the hands of leading corporations and led to a national initiative to take it even further.”
MIT’s Steel Research Group, which is celebrating its 40th year, led to a major national materials initiative launched by President Barack Obama in 2011. MIT Professor of the Practice Gregory Olson (third from left), who founded the SRG, was invited to the White House for a review of the Materials Genome Initiative at its fifth anniversary.
Rita joins the University from the University of London, where she has been Pro-Vice-Chancellor (Finance and Operations) since 2020 and has led a major transformation programme across its finance, digital, estates and HR services.
She has more than 30 years of experience in financial leadership across higher education, infrastructure investment, housing, and the charity sector. She is a Fellow of the Institute of Chartered Accountants in England and Wales, and of the Association of Corporate Treasurers.
In parallel with her career in university leadership, Rita serves as Chair of the Audit Committee and Non-Executive Director at HICL Infrastructure plc, a FTSE 250-listed £3bn investment fund with over 100 infrastructure assets across the UK, Europe, the US, Australia, and New Zealand, supporting education, health, utilities, communication and transport.
Rita will report to the Vice-Chancellor and provide strategic oversight of the University’s financial activities.
She will also lead and manage the University’s Finance Division, and be the sponsor for the Finance Transformation Programme, which is modernising ways of working through new processes, technology and governance.
Anthony Odgers, the University’s current Chief Financial Officer, will step down from his role on 31 December 2025.
Professor Deborah Prentice, Vice-Chancellor, said: "I am delighted to welcome Rita as our new Chief Financial Officer. Rita impressed the interview panel with her vast experience, particularly in finance transformation, her passion for higher education and her commitment to inclusive leadership."
Rita said: "Joining the University of Cambridge is a tremendous honour. I am inspired by the opportunity to lead a transformative finance agenda that supports the University's long-term strategic ambitions. I look forward to working collaboratively across the University to build a finance function that is modern, transparent, and aligned with Cambridge’s world-leading mission."
Rita Akushie has been appointed as the University’s new Chief Financial Officer. She will take up the role in December 2025.
It’s September at Harvard and all the signs are here. Pathways under the canopy of American elms and honey locusts in Harvard Yard are bustling with undergraduates coming and going from academic buildings, dining halls, and residences. Classrooms are brimming with a mixture of excitement and nervousness. Reviews of course outlines and syllabi are quickly giving way to teaching, group discussions, and problem-solving. In lecture halls, seminar rooms, and open studios, students begin to consider topics as wide-ranging as oceanography, archaeology, evolution and disease, rare books, and what qualities make a voice memorable.
Students in Harvard Yard.
Veasey Conway/Harvard Staff Photographer
“A Humanities Colloquium: from Homer to Joyce”
Glenda Carpio, one of six professors who lead the course covering 2,500 years of essential works, makes introductory remarks.
Stephanie Mitchell/Harvard Staff Photographer
Louis Menand listens to his fellow professors.
Stephanie Mitchell/Harvard Staff Photographer
Assistant Professor of English Tara Menon implores students to “do the reading.”
Stephanie Mitchell/Harvard Staff Photographer
“Harvard’s Greatest Hits: The Most Important, Rarest, and Most Valuable Books in Houghton Library”
David Stern (far right) leads the class in the Dana-Palmer House.
Niles Singer/Harvard Staff Photographer
“Can We Know Our Past?”
Rowan Flad lectures in the general education course.
Stephanie Mitchell/Harvard Staff Photographer
Archaeology Professor Jason Ur (right) speaks to a student after class.
Stephanie Mitchell/Harvard Staff Photographer
“Observing the Ocean: Measurements and Instrumentation”
Fiamma Straneo lectures in a Geological Museum Building seminar room.
Stephanie Mitchell/Harvard Staff Photographer
Michelle Diep ’27.
Stephanie Mitchell/Harvard Staff Photographer
Erick Contreras-Rodriguez (left) and Madison Codding, both ’27.
Stephanie Mitchell/Harvard Staff Photographer
Said El Kadi ’26.
Stephanie Mitchell/Harvard Staff Photographer
“Introduction to Voice and Speech”
Students take part in an exercise in Farkas Hall.
Veasey Conway/Harvard Staff Photographer
Eliana Heo ’27 discusses public figures with memorable voices during a small group activity.
Veasey Conway/Harvard Staff Photographer
Students perform body movements in the Theater, Dance & Media class.
Veasey Conway/Harvard Staff Photographer
“Evolutionary Medicine”
Christopher Kuzawa (left), Professor of Human Evolutionary Biology, answers a question from Alexander Merheb ’27.
Niles Singer/Harvard Staff Photographer
Akram Tahar Chaouch ’29.
Niles Singer/Harvard Staff Photographer
Ally Ah Cook ’26.
Niles Singer/Harvard Staff Photographer
The course takes place in the Museum of Comparative Zoology.
New faculty Cécile Fromont is a visual problem solver
Cécile Fromont believes that something new can be revealed about an artwork or artifact every time someone lays eyes on it. It’s one of the reasons she loves the classroom environment, where she and her students can examine objects of visual and material culture together and surprise one another with their perspectives.
“I really, truly, and cheesily love the experience of being amazed at what other people can see and the insights that they can bring that other people in the room would never in a million years have thought about,” said Fromont, professor of the history of art and architecture. “To have that experience over and over again is, for me, one of the great gifts of being a teacher.”
Fromont, an art historian specializing in the visual, material, and religious cultures of Africa, Latin America, and Europe in the early modern period, joined the Department of History of Art and Architecture in 2024, and will begin teaching this fall after a year of research leave. She is also the inaugural faculty director of the Alain Locke Gallery of African & African American Art (formerly the Ethelbert Cooper Gallery) at the Hutchins Center for African & African American Research.
“This is an historic moment for the Hutchins Center and indeed for Harvard as a whole,” said Henry Louis Gates, Jr., the Alphonse Fletcher University Professor and director of the Hutchins Center. “As the only art space at a major university devoted exclusively to exhibiting and exploring African, African American, and Afro-Latin American art, the Alain Locke Gallery now has at its helm an art historian whose work reflects the breadth and depth of this vibrant field of study. Widely respected for the range and originality of her critical scholarship, Cécile Fromont will bring a rigorous, cosmopolitan intelligence to her leadership of the Locke Gallery.”
Fromont’s expertise centers on cross-cultural interactions in the Atlantic world from 1500 to 1850, including as a result of the slave trade. She has long been fascinated by the global significance of the events concentrated in that time and place in history.
“What keeps me interested in studying that time and place is the ways in which the interactions, the connections, the fights, the tragedies, the shipwrecks, and the triumphs of that moment have defined and continued to shape the world that we live in, in terms of the structures of politics, geopolitics, knowledge, and artistic practices,” Fromont said. “It really allows me to understand many of the challenges that we’re facing today and to imagine possible ways forward.”
Part of the appeal of her work, she said, is the chance to play detective, using documents and visual objects to solve historical mysteries. One early example was her first book, which examined the debate over whether Christianity truly shaped the Kingdom of Kongo in the early modern period, or whether its influence was merely superficial. Fromont found that the period’s visual culture — fashion, architecture, crucifixes, figurines, and even currency — supported the existence of a distinctly Kongolese Christian worldview.
“I’ve always been a little bit contrarian. When people say, ‘We’ll never know,’ that usually piques my interest, and I try to see if that is really true,” Fromont said. “That’s perhaps why I have created this field of study for myself: it’s very problem-driven. Identifying a visual problem and trying to get to the bottom of it has been the main motor of my research agenda.”
This semester, Fromont is one of six professors co-teaching “HAA10: Introduction to the History of Art.” In the spring, she will teach the graduate seminar “Africa and the Atlantic World,” and a first-year seminar titled “Making Monsters in the Atlantic World,” a course about what visualizations of monsters in the Atlantic corridor in the early modern period can teach us about cross-cultural encounters, oppression, and control.
“I really try to create a community within the class where we know each other. Then we can discuss ideas as scholars at different stages of thinking about the material,” Fromont said. “I may have been thinking about some of those things for more years than some of the scholars in the class, but there is material that I encounter anew too, and then we are all thinking on our feet.”
Fromont is working on several book projects. One, titled “Objects of Power,” looks at 18th-century protective amulets created by African ritual practitioners in Europe and the Americas — small cloth bundles filled with items like holy hosts, sulfur, prayers, and bones — that lent power and protection to their users and drew scrutiny from both civil and religious authorities such as the Portuguese Inquisition or French courts.
“It tells us about the nature of power and the mode of its exercise in that crucial moment in the history of the Atlantic world,” Fromont said.
Another book project, “The Discreet Charm of the ‘Old Indies,’” re-examines French Baroque tapestries depicting Brazil and aristocrats from the Kingdom of Kongo as imagined exotic tableaux that, seen from a European perspective, both displayed and concealed colonialism and Atlantic slavery, and asks whether it is still appropriate to display these artworks today.
“The question is: As a society, what do we choose to see in these objects?” said Fromont, who is working with artist Sammy Baloji to create a new tapestry featuring a historically accurate scene, which will be displayed in the Netherlands in October. “How do we make a decision as a society about what we make visible and what we make invisible, what we choose to see and what we choose not to see in these objects?”
At the Alain Locke Gallery, Fromont will curate her first on-campus exhibition this spring, a show on art from the French- and Creole-speaking Caribbean. The gallery will open “Renaissance, Race, and Representation in the Harmon and Harriet Kelley Collection of African American Art,” 65 works from one of the country’s finest collections of African American art along with selections from the Hutchins Center’s own collection, on Sept. 30.
“One of the strengths of this gallery is that it’s very nimble and creative by design in the way that it conceives its exhibitions and programming, which creates unique possibilities in terms of what we can bring to the Harvard and Boston community,” Fromont said. “It’s a rare place where you can make an exhibition that is scholarship-led and focused on a research problem, and then the next one may be celebrating the aesthetics of an artistic or vernacular practice. It is a space of possibilities that in many ways channels the spirit of Alain Locke the scholar, but also Alain Locke the art collector, the educator, and the community builder.”
For pregnant women, ultrasounds are an informative (and sometimes necessary) procedure. They typically produce two-dimensional black-and-white scans of fetuses that can reveal key insights, including biological sex, approximate size, and abnormalities like heart issues or cleft lip. If your doctor wants a closer look, they may use magnetic resonance imaging (MRI), which uses magnetic fields to capture images that can be combined to create a 3D view of the fetus.
MRIs aren’t a catch-all, though; the 3D scans are difficult for doctors to interpret well enough to diagnose problems because our visual system is not accustomed to processing 3D volumetric scans (in other words, a wrap-around look that also shows us the inner structures of a subject). Enter machine learning, which could help model a fetus’s development more clearly and accurately from data — although no such algorithm has been able to model their somewhat random movements and various body shapes.
That is, until a new approach called “Fetal SMPL” from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Boston Children’s Hospital (BCH), and Harvard Medical School presented clinicians with a more detailed picture of fetal health. It was adapted from “SMPL” (Skinned Multi-Person Linear model), a 3D model developed in computer graphics to capture adult body shapes and poses, as a way to represent fetal body shapes and poses accurately. Fetal SMPL was then trained on 20,000 MRI volumes to predict the location and size of a fetus and create sculpture-like 3D representations. Inside each model is a skeleton with 23 articulated joints called a “kinematic tree,” which the system uses to pose and move like the fetuses it saw during training.
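The kinematic-tree idea can be illustrated with a short sketch: each joint stores a rotation relative to its parent, and world-space joint positions are obtained by composing transforms from the root outward. The joint layout, function name, and offsets below are illustrative assumptions for a generic articulated skeleton, not the actual Fetal SMPL code.

```python
import numpy as np

# Minimal forward kinematics over a "kinematic tree" (hypothetical layout,
# not the real Fetal SMPL parameters). Each joint i has a parent index,
# an offset from that parent, and a local rotation.

def forward_kinematics(parents, offsets, rotations):
    """parents[i]: index of joint i's parent (-1 for the root).
    offsets[i]: joint i's position relative to its parent, shape (3,).
    rotations[i]: joint i's local rotation matrix, shape (3, 3).
    Returns world-space joint positions, shape (n, 3)."""
    n = len(parents)
    world_rot = [None] * n
    world_pos = [None] * n
    for i in range(n):  # assumes every parent appears before its children
        if parents[i] == -1:
            world_rot[i] = rotations[i]
            world_pos[i] = np.asarray(offsets[i], float)
        else:
            p = parents[i]
            # Compose the parent's accumulated transform with the local one.
            world_rot[i] = world_rot[p] @ rotations[i]
            world_pos[i] = world_pos[p] + world_rot[p] @ offsets[i]
    return np.array(world_pos)
```

Rotating a joint near the root moves every descendant with it, which is what lets a single set of joint angles pose the whole surface model consistently.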
The extensive, real-world scans that Fetal SMPL learned from helped it develop pinpoint accuracy. Imagine stepping into a stranger’s footprint while blindfolded, and not only does it fit perfectly, but you correctly guess what shoe they wore — similarly, the tool closely matched the position and size of fetuses in MRI frames it hadn’t seen before. Fetal SMPL was only misaligned by an average of about 3.1 millimeters, a gap smaller than a single grain of rice.
The approach could enable doctors to precisely measure things like the size of a baby’s head or abdomen and compare these metrics with healthy fetuses at the same age. Fetal SMPL has demonstrated its clinical potential in early tests, where it achieved accurate alignment results on a small group of real-world scans.
“It can be challenging to estimate the shape and pose of a fetus because they’re crammed into the tight confines of the uterus,” says lead author, MIT PhD student, and CSAIL researcher Yingcheng Liu SM ’21. “Our approach overcomes this challenge using a system of interconnected bones under the surface of the 3D model, which represent the fetal body and its motions realistically. Then, it relies on a coordinate descent algorithm to make a prediction, essentially alternating between guessing pose and shape from tricky data until it finds a reliable estimate.”
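The alternating scheme Liu describes can be sketched with a toy model: hold one block of parameters fixed while solving for the other, then swap, repeating until the fit stabilizes. Here the “pose” is reduced to a rigid translation and the “shape” to a global scale, a deliberate simplification of the real SMPL pose and shape spaces; the function and parameter names are illustrative.

```python
import numpy as np

# Toy coordinate descent: alternately update "pose" (a translation) and
# "shape" (a scale) to fit observed points to a template. Each step is a
# closed-form least-squares solve with the other block held fixed.

def fit(template, observed, iters=4):
    shift = np.zeros(3)   # stand-in "pose" parameters
    scale = 1.0           # stand-in "shape" parameter
    for _ in range(iters):
        # Pose step: best translation given the current scale.
        shift = (observed - scale * template).mean(axis=0)
        # Shape step: best scale given the current translation.
        centered = observed - shift
        scale = (centered * template).sum() / (template * template).sum()
    return shift, scale
```

The real system searches far richer pose and shape spaces, but the pattern is the same: each sub-problem is easier than the joint one, and alternating between them converges to a reliable estimate.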
In utero
Fetal SMPL was tested on shape and pose accuracy against the closest baseline the researchers could find: a system that models infant growth called “SMIL.” Since babies out of the womb are larger than fetuses, the team shrank those models by 75 percent to level the playing field.
The system outperformed this baseline on a dataset of fetal MRIs between the gestational ages of 24 and 37 weeks taken at Boston Children’s Hospital. Fetal SMPL was able to recreate real scans more precisely, as its models closely lined up with real MRIs.
The method was efficient at lining up its models to images, needing only three iterations to arrive at a reasonable alignment. In an experiment that counted how many incorrect guesses Fetal SMPL made before arriving at a final estimate, its accuracy plateaued from the fourth step onward.
The researchers have just begun testing their system in the real world, where it produced similarly accurate models in initial clinical tests. While these results are promising, the team notes that they’ll need to apply their results to larger populations, different gestational ages, and a variety of disease cases to better understand the system’s capabilities.
Only skin deep
Liu also notes that their system only helps analyze what doctors can see on the surface of a fetus, since only bone-like structures lie beneath the skin of the models. To better monitor babies’ internal health, such as liver, lung, and muscle development, the team intends to make their tool volumetric, modeling the fetus’s inner anatomy from scans. Such upgrades would make the models more human-like, but the current version of Fetal SMPL already presents a precise (and unique) upgrade to 3D fetal health analysis.
“This study introduces a method specifically designed for fetal MRI that effectively captures fetal movements, enhancing the assessment of fetal development and health,” says Kiho Im, Harvard Medical School associate professor of pediatrics and staff scientist in the Division of Newborn Medicine at BCH’s Fetal-Neonatal Neuroimaging and Developmental Science Center. Im, who was not involved with the paper, adds that this approach “will not only improve the diagnostic utility of fetal MRI, but also provide insights into the early functional development of the fetal brain in relation to body movements.”
“This work reaches a pioneering milestone by extending parametric surface human body models for the earliest shapes of human life: fetuses,” says Sergi Pujades, an associate professor at University Grenoble Alpes, who wasn’t involved in the research. “It allows us to detangle the shape and motion of a human, which has already proven to be key in understanding how adult body shape relates to metabolic conditions and how infant motion relates to neurodevelopmental disorders. In addition, the fact that the fetal model stems from, and is compatible with, the adult (SMPL) and infant (SMIL) body models, will allow us to study human shape and pose evolution over long periods of time. This is an unprecedented opportunity to further quantify how human shape growth and motion are affected by different conditions.”
Liu wrote the paper with three CSAIL members: Peiqi Wang SM ’22, PhD ’25; MIT PhD student Sebastian Diaz; and senior author Polina Golland, the Sunlin and Priscilla Chou Professor of Electrical Engineering and Computer Science, a principal investigator in MIT CSAIL, and the leader of the Medical Vision Group. BCH assistant professor of pediatrics Esra Abaci Turk, Inria researcher Benjamin Billot, and Harvard Medical School professor of pediatrics and professor of radiology Patricia Ellen Grant are also authors on the paper. This work was supported, in part, by the National Institutes of Health and the MIT CSAIL-Wistron Program.
The researchers will present their work at the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in September.
Fetal SMPL was trained on 20,000 MRI volumes to predict the location and size of a fetus and create sculpture-like 3D representations. The approach could enable doctors to precisely measure things like the size of a baby’s head and compare these metrics with healthy fetuses at the same age.
This spring, J-PAL North America — a regional office of MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL) — launched its first ever Learning Lab, centered on climate action. The Learning Lab convened a cohort of government leaders who are enacting a broad range of policies and programs to support the transition to a low-carbon economy. Through the Learning Lab, participants explored how to embed randomized evaluation into promising solutions to determine how to maximize changes in behavior — a strategy that can help advance decarbonization in the most cost-effective ways to benefit all communities. The inaugural cohort included more than 25 participants from state agencies and cities, including the Massachusetts Clean Energy Center, the Minnesota Housing Finance Agency, and the cities of Lincoln, Nebraska; Newport News, Virginia; Orlando, Florida; and Philadelphia.
“State and local governments have demonstrated tremendous leadership in designing and implementing decarbonization policies and climate action plans over the past few years,” said Peter Christensen, scientific advisor of the J-PAL North America Environment, Energy, and Climate Change Sector. “And while these are informed by scientific projections on which programs and technologies may effectively and equitably reduce emissions, the projection methods involve a lot of assumptions. It can be challenging for governments to determine whether their programs are actually achieving the expected level of emissions reductions that we desperately need. The Climate Action Learning Lab was designed to support state and local governments in addressing this need — helping them to rigorously evaluate their programs to detect their true impact.”
From May to July, the Learning Lab offered a suite of resources for participants to leverage rigorous evaluation to identify effective and equitable climate mitigation solutions. Offerings included training lectures, one-on-one strategy sessions, peer learning engagements, and researcher collaboration. State and local leaders built skills and knowledge in evidence generation and use, reviewed and applied research insights to their own programmatic areas, and identified priority research questions to guide evidence-building and decision-making practices. Programs prioritized for evaluation covered topics such as compliance with building energy benchmarking policies, take-up rates of energy-efficient home improvement programs such as heat pumps and Solar for All, and scoring criteria for affordable housing development programs.
“We appreciated the chance to learn about randomized evaluation methodology, and how this impact assessment tool could be utilized in our ongoing climate action planning. With so many potential initiatives to pursue, this approach will help us prioritize our time and resources on the most effective solutions,” said Anna Shugoll, program manager at the City of Philadelphia’s Office of Sustainability.
This phase of the Learning Lab was possible thanks to grant funding from J-PAL North America’s longtime supporter and collaborator Arnold Ventures. The work culminated in an in-person summit in Cambridge, Massachusetts, on July 23, where Learning Lab participants delivered a presentation on their jurisdiction’s priority research questions and strategic evaluation plans. They also connected with researchers in the J-PAL network to further explore impact evaluation opportunities for promising decarbonization programs.
“The Climate Action Learning Lab has helped us identify research questions for some of the City of Orlando’s deep decarbonization goals. J-PAL staff, along with researchers in the J-PAL network, worked hard to bridge the gap between behavior change theory and the applied, tangible benefits that we achieve through rigorous evaluation of our programs,” said Brittany Sellers, assistant director for sustainability, resilience and future-ready for Orlando. “Whether we’re discussing an energy-efficiency policy for some of the biggest buildings in the City of Orlando or expanding [electric vehicle] adoption across the city, it’s been very easy to communicate some of these high-level research concepts and what they can help us do to actually pursue our decarbonization goals.”
The next phase of the Climate Action Learning Lab will center on building partnerships between jurisdictions and researchers in the J-PAL network to explore the launch of randomized evaluations, deepening the community of practice among current cohort members, and cultivating a broad culture of evidence building and use in the climate space.
“The Climate Action Learning Lab provided a critical space for our city to collaborate with other cities and states seeking to implement similar decarbonization programs, as well as with researchers in the J-PAL network to help rigorously evaluate these programs,” said Daniel Collins, innovation team director at the City of Newport News. “We look forward to further collaboration and opportunities to learn from evaluations of our mitigation efforts so we, as a city, can better allocate resources to the most effective solutions.”
The Climate Action Learning Lab is one of several offerings under the J-PAL North America Evidence for Climate Action Project. The project’s goal is to convene an influential network of researchers, policymakers, and practitioners to generate rigorous evidence to identify and advance equitable, high-impact policy solutions to climate change in the United States. In addition to the Learning Lab, J-PAL North America will launch a climate special topic request for proposals this fall to fund research on climate mitigation and adaptation initiatives. J-PAL will welcome applications from both research partnerships formed through the Learning Lab as well as other eligible applicants.
Local government leaders, researchers, potential partners, or funders committed to advancing climate solutions that work, and who want to learn more about the Evidence for Climate Action Project, may email na_eecc@povertyactionlab.org or subscribe to the J-PAL North America Climate Action newsletter.
The Climate Action Learning Lab Summit served as a culmination of three months of programming to build participants' skills and knowledge in evidence generation and use in the climate space.
Autistic people experience poorer mental and physical health and live shorter lives than the general population. They are significantly more likely than non-autistic people to die by suicide. Recent estimates suggest that one in three autistic people has experienced suicidal ideation and nearly one in four has attempted suicide.
In a study published today in Autism, researchers from the University of Cambridge and Bournemouth University found that of more than 1,000 autistic adults surveyed, only one in four reached out to the NHS the last time they experienced suicidal thoughts or behaviours.
Among those who did not seek NHS support, the most common reasons were that they believed the NHS could not help them (48%), that they tried to cope alone (54%), or that they felt there was “no point” due to long waiting lists for mental health services (43%). Many participants commented that the NHS’s limited range of mental health services was not suitable for “people like us”.
Just over a third (36%) of participants who did not seek NHS support reported previous negative experiences with the NHS, while a similar number (34%) said they had had bad experiences specifically when seeking help for suicidality – and more than one in 10 (12%) said they had been turned away or had a referral rejected.
One in four participants (25%) said they feared consequences such as being sectioned. Others highlighted practical barriers, such as not being able to face trying to get an appointment with their GP (34%). Notably, no participants said they had avoided seeking help because they did not want to be stopped.
This study also corroborates findings that certain gender groups may experience even greater barriers to accessing NHS support. Analysis by the team at Bournemouth and Cambridge showed that among the participants, cisgender women and those who were transgender or gender-divergent were more likely to have had negative experiences, while transgender and gender-divergent autistic people were especially likely to fear that they would not be believed by NHS staff.
Co-lead author Dr Tanya Procyshyn from the Autism Research Centre at the University of Cambridge said: “Our findings make it clear that autistic people do want support when they are struggling with suicidality, but many have been let down by a system that they experience as inaccessible, unhelpful, or even harmful. Without urgent reform to make services trustworthy and better suited to autistic people’s needs, preventable deaths will continue.”
This study offers new insights into the significantly higher suicide rates among the autistic population, a stark reality recognised by the Government’s inclusion of autistic people as a priority group in the 2023 Suicide Prevention Strategy. The authors note that policy commitments must lead to meaningful service changes, such as autism-informed training for healthcare professionals, alternatives to phone-based appointment booking, and flexible, autism-adapted mental health services. They stress that these changes must be co-designed with autistic people to ensure acceptability and rebuild trust.
Co-lead author, Dr Rachel Moseley from the Department of Psychology at Bournemouth University, said: “We know from other research that healthcare professionals don’t receive sufficient training to help them work effectively with autistic people. Our work shows that when faced with autistic people in crisis, clinicians often overlook these signs, or react in a way that causes further damage. For these reasons, it’s imperative that the government takes steps to address inequalities that prevent autistic people from accessing healthcare that could save their lives.”
Professor Sir Simon Baron-Cohen, Director of the Autism Research Centre at Cambridge and the senior author on the team, added: “There is a mental health crisis in the autism community, with one in four autistic adults planning or attempting suicide. This is unacceptably high. Although the UK Government has finally now recognised autistic people as a high-risk group in relation to suicide, the essential changes that could prevent these unnecessary deaths are not materialising fast enough.
“We are glad that Autism Action, the charity that funds a number of our suicide prevention studies, is translating the research into policy and practice, but we need to see a massive injection of funding into support services to avert multiple future tragedies.”
The research was instigated by the charity Autism Action as part of its mission to reduce the number of autistic people who think about, attempt and die by suicide.
Tom Purser, CEO of Autism Action, said: “It is unacceptable that our health service fails autistic people at the time of their greatest need. Autistic people want help but barriers in the form of inaccessible systems, poor attitudes and lack of training are preventing this, and in one in ten cases people are being turned away or rejected.
“The recently published Learning from Lives and Deaths report, focused on people with a learning disability and autistic people, highlighted that a lack of access to the right support is a massive factor that leads to premature deaths. We know a better system is possible – the Government must now lead the way to save lives.”
In the UK and Ireland, Samaritans can be contacted on freephone 116 123, or email jo@samaritans.org or jo@samaritans.ie. Alternatively, you can contact PAPYRUS (Prevention of Young Suicide) HOPELINE247 on 0800 068 4141 or by texting 88247.
Reference
‘I did not think they could help me’: Autistic adults’ reasons for not seeking public healthcare when they last experienced suicidality. Autism; 15 Sept 2025
Now in its 11th year, the Award invites young people aged 14-18 from across the UK to submit stories of up to 1,000 words. It was created to discover and inspire the next generation of writers and is a cross-network collaboration between BBC Radio 1 and BBC Radio 4.
This year’s shortlist features five young female writers whose stories explore contemporary themes ranging from toxic masculinity and inter-generational relations to climate change, power and responsibility. Praised as ‘beautifully subversive,’ ‘nuanced’ and ‘mature,’ the shortlisted works range from a dark tale told from the perspective of a black cat to a mythological retelling of the climate crisis, a lyrical portrait of three generations of women cooking together, a supernatural ‘housewife’s revenge’ story, and a sharp look at peer pressure and toxic masculinity.
Dr Elizabeth Rawlinson-Mills, University Associate Professor in the Faculty of Education and Fellow of Robinson College, said:
“It's a pleasure once again to read these remarkable and often startling stories. We have become accustomed to the shortlisted stories for the YWA offering us reassuring evidence of young writers' skill and ambition. This year's shortlist, with work that experiments with voice and violence, bodies and gender, things unspoken and unspeakable, feels especially timely. These are stories that look both outwards and inwards, and which confront the reader powerfully. The University of Cambridge is extremely proud to support the Young Writers Award.”
The shortlisted stories are:
‘Wildfolk Report 2025’ by Holly Dye, 17, from Tunbridge Wells
‘Adu, Lasun and Marcha’ by Anoushka Patel, 18, from Leicester
‘Roast Beef’ by Edith Taussig, 17, from New Malden, Greater London
‘The Omen’ by Anna Tuchinda, 17, from Thailand, an international student in Edinburgh
‘Scouse’s Run’ by Rebecca Smith, 17, from Sheffield
The five stories will be available to listen to on BBC Sounds, read by actors including Amit Shah, Maggie Service, Priya Kansara, Sam Pitcher and Andy Clark. Interviews with the writers are also available to listen to, and can be read on the BBC Radio 1 website.
The winner will be announced at the BBC Short Story Awards ceremony at Broadcasting House on Tuesday 30 September, broadcast live on BBC Radio 4’s Front Row, with the winning writer also appearing on Radio 1’s Life Hacks.
The partnership also offers unique professional development opportunities for Cambridge PhD students, who take part in a BBC shadowing scheme, gaining experience in cultural engagement and public communication.
Cambridge's long-term partnership with both the BBC National Short Story Award and the BBC Young Writers’ Award is led by Dr Bonnie Lander Johnson (Fellow and Associate Professor in English at Downing and Newnham Colleges) and Dr Elizabeth Rawlinson-Mills (University Associate Professor in the Faculty of Education and Fellow of Robinson College).
Dr Lander Johnson said:
“The National Short Story Awards continue to be the largest and most prestigious awards of their kind in the UK. I am proud to represent the University on this partnership; I believe we have a role to play in supporting the production of literary excellence in Britain. Storytelling is an essential human impulse through which we reflect on our changing world, inspire younger generations, and make sense of our collective and individual lives. It is essential that Cambridge University remains part of such crucial cultural work. Who are we if we cannot tell our stories?”
About the Award
Since its launch in 2015, the BBC Young Writers’ Award has highlighted some of the most talented young voices in the country. Previous winners include Lottie Mills, Tabitha Rubens, Elena Barham, Atlas Weyland Eden and Lulu Frisson, with many going on to secure further prizes, publications and acclaim.
The 2025 judging panel is chaired by Radio 1 presenter Lauren Layfield, joined by poet and former Children’s Laureate Joseph Coelho, novelist Jessica Moor, poet Matt Goodfellow, and 2020 Young Writers’ Award winner Lottie Mills.
The shortlist for the 2025 BBC Young Writers’ Award with Cambridge University was announced Sunday 14 September, live on BBC Radio 1’s Life Hacks.
NUS Chief Information Technology Officer, Ms Tan Shui-Min, has been named Transformational CIO of the Year at the SPARK Asia Digital Leaders Award 2025, which honours Chief Information Officers across Asia who have delivered game-changing outcomes through digital innovation.
Under Ms Tan’s visionary leadership, NUS has reimagined its digital landscape — from developing AI-Know, a university-wide platform that democratises AI access, to implementing green IT infrastructure that advances both operational efficiency and sustainability. AI-Know has introduced tools such as AI-Minute for automated meeting summaries and AI-Create for building no-code digital agents, saving the University more than 265,000 man-hours in 2024, and it is set to generate millions in cost savings and cost avoidance.
She has also championed sustainability through the Green Data Centre Network Upgrade Project, which reduced power consumption by 45 per cent and rack space by 49 per cent, resulting in significant energy and cost savings while advancing NUS’ Green Plan 2030 commitments.
Collaboration is a hallmark of Ms Tan’s leadership. She has pioneered a co-creation model with more than 120 departments, working closely with colleagues to identify pain points and design fit-for-purpose AI solutions. Examples include the laboratory inspection tool with the Office of Risk Management and Compliance, and chatbots with the Offices of Finance and Human Resources. These partnerships ensure business relevance, foster shared ownership, and accelerate adoption. She also spearheaded My Virtual Lab, NUS’ first 3D immersive virtual laboratory, which redefines STEM education by enabling interactive, risk-free training and collaborative simulations.
Looking ahead, NUS is scaling up research capabilities through expanded high-performance computing facilities, currently ranked among the world’s top 110 supercomputers. At the same time, initiatives such as NUSHub and the NextGeneSIS programme are enhancing student engagement and optimising the learner experience across both degree and lifelong learning pathways.
These initiatives are guided by the IT Strategic Plan blueprint envisioned and driven by Ms Tan in 2023. Anchored by four strategic goals — Digitally Empowered Business, Cyber Fortress, Flexible Infrastructure, and Future-Ready IT Workforce — the roadmap ensures technology initiatives are closely aligned with NUS’ mission and vision, supporting both operational excellence and future growth.
Reflecting on the award, Ms Tan said: “I am deeply honoured by this recognition, which is a testament to the collective spirit of NUS. It reflects our shared commitment to building a ‘Borderless University, Powered by Infinite Technology,’ where innovation is open and accessible to all. My guiding principle has always been to empower others – fostering an environment where staff and students feel encouraged to experiment, learn, and thrive.
“Looking ahead, I remain committed to advancing our digital transformation, ensuring that technology not only keeps pace with the evolving needs of our university but truly enriches the everyday experiences of our community.”
The University’s flagship sustainability festival, NUS Sustainability CONNECT, returns for its third edition this September – featuring close to 40 events throughout the month!
While the festival has typically showcased sustainability research and best practices, there is a growing momentum across the wider NUS community to incorporate green initiatives into day-to-day work. More NUS units outside the traditional sustainability sphere are recognising the value and relevance of sustainability and are eager to share their efforts.
For instance, this year’s CONNECT welcomes partners such as NUS Libraries, NUS College (NUSC) and University Health Centre (UHC), all of which are organising events to engage and inspire the broader NUS community on sustainability efforts.
Walking the talk in sustainability: NUS Libraries’ guided tours and solar panel exhibition
Did you know that BookBridge on Level 2 of NUS Central Library was among the first structures in Singapore to use tropical mass engineered timber, a renewable and sustainable building material?
Choosing eco-friendly construction materials is just one of the many ways NUS Libraries is making sustainability a daily reality. To showcase its efforts, NUS Libraries will be conducting a guided tour of the Central Library on 17 September 2025 to share how it is walking the talk in sustainability, including reducing its carbon footprint in daily operations.
In recent years, NUS Libraries has been steadily developing its four sustainability pillars – green libraries; green resources; green programming; and green digital and innovation – in support of the Singapore Green Plan 2030 and the United Nations' Sustainable Development Goals.
Beyond this, NUS Libraries will also be hosting a guided tour of a garden that was set up on campus in the early 1980s. Taking place on 24 September 2025, the tour will explore this historical garden, which is under the care of the Department of Biological Sciences and houses legacy plants, such as cycads, that have been in the garden since its founding years. Rarely open to the public, the garden offers a glimpse into the space that supports the department’s outreach and teaching activities.
In addition, NUS Central Library is running an exhibition – Solar Energising Heritage: Coloured Solar Panel Development for Historical Shophouses – from now till 6 October 2025. This collaborative project with the Solar Energy Research Institute of Singapore (SERIS), NUS Museum and NUS Baba House features custom-designed coloured Building-Integrated Photovoltaics (BIPV) panels developed for NUS Baba House, using SERIS’ unique dotlism pattern technology.
Going forward, NUS Libraries is committed to paving the way in advancing sustainability in teaching, learning and research.
Inspiring change: Student-led sustainability initiatives at NUS College’s Impact Festival
Committed to delivering interdisciplinary education, NUSC will present the Impact Festival on 17 September 2025 – a vibrant showcase of student projects from the Impact Experience programme. This is a capstone course where students collaborate with communities in Singapore and across Southeast Asia to drive meaningful social change.
Given that sustainability is inherently interdisciplinary, it is no surprise that a number of these projects focused on sustainability-related themes such as transforming food waste management, promoting sustainable tourism, protecting animal welfare, restoring coral reefs and cultivating sustainable agriculture.
One notable example is a project led by Team Bayanihan (which means unity in Tagalog) comprising seven NUSC students. For decades, bamboo has been viewed as the poor man’s timber in the Philippines. To change this mindset, the team developed a unique bamboo-based curriculum to help Filipino youth appreciate this sustainable resource that is readily available in their own backyard.
In partnership with Grow School Philippines, Team Bayanihan conducted three workshops in Nasugbu, Batangas, teaching hundreds of young people the art of bamboo weaving and the science of structural design. The goal was to showcase bamboo’s versatility, both as a material for crafts and a medium for constructing larger structures, to the next generation of Filipinos. Through this, Team Bayanihan hopes to inspire youth across the Philippines to embrace the beauty and utility of bamboo in building a more sustainable future.
Championing sustainability through healthy eating: UHC’s sustainable granola workshop
Eating healthily and supporting the planet can go hand in hand, and UHC is keen to demonstrate how.
It is conducting a sustainable granola workshop on 19 September 2025 to share with participants how spent grains can be upcycled into nutritious granola. Guided by experts in the sustainable food innovation sector, participants will not only create and sample their own sustainable granola, but also take home a recipe kit so they can make this wholesome snack at home.
The complimentary workshop for NUS staff, which was fully booked within a day, is part of the Centre’s Feel Good Friday series, which explores practical ways for the NUS community to adopt greener habits while maintaining a balanced diet and nutrition.
Encouraged by the strong response, UHC plans to offer more workshops to promote sustainable habits and healthy eating across the NUS community.
To learn more about NUS Sustainability CONNECT or to register your interest for the above events, please visit the NUS Sustainability CONNECT website.
Now, in a first-of-its-kind study, researchers at the University of Cambridge have trialled an unusual solution: a series of regular chats with a humanoid robot.
In work published in the International Journal of Social Robotics, the researchers found that when carers talked regularly to a robot programmed to interact with them, it produced significant positive benefits. These included the carers feeling less lonely and overwhelmed, and being more in touch with their own emotions.
“In other words, these conversations with a social robot gave caregivers something that they sorely lack – a space to talk about themselves,” said first author Dr Guy Laban from Cambridge’s Department of Computer Science and Technology.
He and an international team of colleagues set up a five-week intervention with a group of informal caregivers – those who care for friends or family members without being paid or formally trained to do so.
While many carers find the experience rewarding, supporting those who have significant physical and mental health conditions can also cause them physical and emotional strain.
The researchers found that increased care and family responsibilities, along with shrinking personal space and reduced social engagement, are reasons why informal caregivers often report a tremendous sense of loneliness.
One coping strategy often used by people in emotional distress is self-disclosure and social sharing – for example, talking to friends. But this is not always possible for carers who often face a lack of social support and in-person interaction.
Interested in seeing how the rapidly developing field of social robotics could help address this issue, the researchers set up an intervention for a group of carers.
Those who took part, ranging from parents looking after children with disabilities to older adults caring for a partner with dementia, were able to chat to the humanoid robot Pepper twice a week throughout the five weeks.
The research team wanted to see how carers’ perceptions of the robot evolved over time and whether they saw it as comforting. They were also looking to see how that in turn affected their moods, their feelings of loneliness and stress levels and what the impact was on their emotion regulation.
After discussing everyday topics with Pepper, the carers’ moods improved and they viewed the robot as increasingly comforting, the researchers found. The participants also reported feeling progressively less lonely and stressed.
“Over those five weeks, carers gradually opened up more,” said Laban. “They spoke to Pepper more freely, for longer than they had done at the start, and they also reflected more deeply on their own experiences.
“They told us that chatting to the robot helped them to open up, feel less lonely and overwhelmed, and reconnect with their own emotional needs.”
The research also showed that being able to talk to a social robot could help carers translate their unspoken emotions into meaningful, shared understanding.
For example, after the five-week intervention, carers reported a greater acceptance of their caregiving role, reappraising it more positively and with reduced feelings of blame towards others.
These results highlight the potential of social robots to provide emotional support for individuals coping with emotional distress.
“Informal carers are often overwhelmed by emotional burdens and isolation,” said co-author Professor Emily Cross from ETH Zurich. “This study is – to the best of our knowledge – the first to show that a series of conversations with a robot about themselves can significantly reduce carers’ loneliness and stress.
“The intervention also promoted acceptance of their caregiving role and strengthened their ability to regulate their emotions. This highlights ways in which assistive social robots can offer emotional support when human connection is often scarce.”
Data bolsters theory about plunging Catholic Mass attendance
Christy DeSmith
Harvard Staff Writer
Surveys tracking religious engagement globally show decline starts after church’s 1960s reforms
In the early 1960s, Catholic bishops from around the world met to update church doctrine for a new era. Reforms made by the Second Ecumenical Council of the Vatican, also known as Vatican II, were meant to foster more inclusive congregations. The most famous allowed priests to celebrate Mass in languages other than Latin.
Academics and certain church insiders have long contended that Vatican II backfired and instead triggered a worldwide decline in Mass attendance. Now an economics working paper has bolstered that claim with new levels of statistical detail.
“Between 1965 and 2010, we find a striking worldwide reduction in Catholic participation in formal services,” said co-author Robert J. Barro, Paul M. Warburg Professor of Economics. “It cumulates to something like 20 percentage points.”
The findings were made possible by an innovative new dataset, mined from the International Social Survey Program (ISSP). By pulling answers to survey questions about religious service attendance in childhood, Barro and his collaborators have compiled the first reliable information on long-running trends in 66 countries. For some places, the numbers stretch as far back as the 1920s.
As a result, the project substantially extends knowledge about fluctuating patterns of religious service attendance worldwide. It also uses an “event study” design to examine two historic episodes: Vatican II and the late 1980s/early ’90s collapse of communism in Eastern Europe.
“This paper is a call to action, to tell people about this extraordinary mountain of data that haven’t been exploited,” said co-author Laurence R. Iannaccone, an economics professor at Chapman University. “It can be used to reconstruct rates of church participation across numerous nations — and we should be studying it very closely.”
The first-of-its-kind statistical analysis owes a debt to the late Andrew Greeley, a priest and best-selling romance novelist who earned a sociology Ph.D. in 1962. As a staffer at the University of Chicago’s National Opinion Research Center, Greeley worked for years on the nationally representative General Social Survey. According to Iannaccone, who knew and admired Greeley, the iconoclastic scholar eventually convinced the center to add queries on childhood service attendance. Respondents were also asked how frequently their parents had attended while they were growing up.
At Greeley’s behest, similar language was added to the internationally representative ISSP in 1991. Iannaccone, director of Chapman’s Institute for the Study of Religion, Economics, and Society, was the first to flag the potential of these retrospective questions in an unpublished paper from 2003.
“I thought, wait a minute — this is like a telephone call across time,” recalled Iannaccone, who suspected those living under oppressive regimes would be more likely to answer honestly about the past.
Barro, co-editor of the Quarterly Journal of Economics, was struck by Iannaccone’s idea and encouraged him to prepare his paper for submission. But life got busy, the Chapman professor said, and he never followed through. Eventually Barro enlisted former Harvard research fellow Edgard Dewitte to help complete the analysis using new ISSP data.
The three co-authors culled more than 200,000 responses covering six continents from surveys conducted in 1991, 1998, 2008, and 2018. “So an 80-year-old who answered the 1998 survey could take you all the way back to the 1920s or ’30s,” Barro explained.
A series of exercises ensured the data’s reliability. For starters, the researchers could compare all four sets of ISSP results for a single year. “You might think the memory gets less accurate as time passes,” said Barro, a macroeconomist who often researches religion in partnership with his wife, economics lecturer Rachel M. McCleary. “But we found it was remarkably stable.”
The information also checked out against contemporaneous results generated by more targeted surveys. Greeley himself co-authored a 1987 paper citing Gallup poll evidence of falling numbers in the pews within the U.S. Retrospective data from the ISSP also stood up against four decades of World Values Survey results from 48 countries.
No quantitative analysis could pinpoint why Vatican II was so alienating. Critics have argued that it undermined church hierarchy and sowed division between reformers and traditionalists. For his part, Greeley pegged it on Pope Paul VI overruling a special commission’s recommendation to soften the church’s stance on contraception. “It was this development, more than any other, that shattered the authority structure,” he wrote in 1998.
But the economists can confirm that the trend was unique to Catholicism. They did so by comparing traditionally Catholic countries — where more than half the population identified with the faith in 1900 — to other countries in the sample, including Christian ones dominated by Protestant or Orthodox affiliations. Also studied were individuals with Catholic versus non-Catholic parents.
[Figure: Monthly religious-service attendance rate across the world, shown by hemisphere and by region. Source: “Looking Backward: Long-Term Religious Service Attendance in 66 Countries”.]
The trendlines show nothing distinct prior to 1965, when Vatican II’s changes were announced. But then a sudden decline in monthly service attendance emerges among Catholics and within Catholic-majority countries including Ireland, Italy, and Spain. Other events, including a series of sex abuse scandals, likely contributed to the sustained effect the researchers quantified over subsequent decades.
“In a sense, the findings just bolster something people have been saying for a long time,” Iannaccone remarked. “We’re now able to see, across a broad range of countries, that Vatican II had the long-run effect of substantially reducing rates of church attendance.”
A less dramatic finding concerns the fall of the Soviet Union and its allies in the Eastern bloc, all hostile to communal worship. The data offered the economists no hints of resurging attendance once these communist governments started toppling in 1989.
“We were pretty surprised by what we found there,” Barro said. “We had bought into the view that the end of communism promoted a revival of religion.”
The dataset also helped enumerate something the co-authors call “the great religious divergence.” The Global North and Global South were statistical lookalikes in 1950, with an average monthly participation rate hovering near 55 percent. By 2010, rates had fallen to 28 percent across the northern hemisphere, while numbers were flat across a series of under-surveyed countries in Africa and Latin America.
Iannaccone hopes this work inspires a new generation working on population surveys.
“By adding just a few questions, we can learn not only what’s going on today in Portugal or Armenia or the Baltic states,” he said. “We can find out a lot about what happened in the distant past, maybe even in places like China where … the government is wary about asking and the people are even more wary about answering.”
Adam Aleksic — aka the ‘Etymology Nerd’ — discusses how social media algorithms are transforming language
Liz Mineo
Harvard Staff Writer
Fascinated by words, Adam Aleksic ’23 created a blog to write about their origins when he was in ninth grade. After graduating from Harvard with a concentration in linguistics and government, he became known online as the Etymology Nerd, a self-described influencer who has more than 3 million followers on TikTok, Instagram, and YouTube combined.
In this edited interview, Aleksic talks about his recently published book, “Algospeak: How Social Media is Transforming the Future of Language,” which explores algorithms’ influences on language and culture.
Compared to other developments such as writing, the printing press, and the internet, what is unique about how social media is transforming the way we communicate?
I’m a big believer that the medium is the message. The way the information is being diffused is going to affect how we communicate. For example, with the arrival of writing, there was this big shift away from us telling stories with rhyme and meter. Plato said that writing was going to make us worse at remembering things. With the printing press, information is diffused more quickly, and more people have the ability to be literate, but there are still gatekeepers, which affects who gets to tell the story. And then the internet allows us to lose the gatekeepers; anybody can tell the story now, and that’s another paradigm shift in language. Algorithms are a new paradigm shift because the centralization of the internet that occurred in the late 2010s, coupled with the way these algorithms push content through personalized recommendation feeds, is changing how we understand the very act of communication.
What role do algorithms play in the evolution of language?
Algorithms are shaping the way we speak. Platforms’ priorities play an important role in organizing and shaping how our language develops. The algorithm pushes more trends, creates more in-groups that then create new language. New trending words are amplified by social media; creators replicate words that they know are going viral, because it helps them go more viral, and then they push the words more into existence. This is the cycle that we’re constantly in. I think it’s because of the algorithm, which amplifies trends, that we’re getting more rapid language change than before. The biggest takeaway from my book is that algorithms are deeply affecting our society right now, and we should be paying attention to them.
When explaining the role of algorithms and influencers in making certain words popular, you write that “algorithms are the culprits, influencers are the accomplices, language is the weapon, and readers are the victims.” Can you unpack this?
When I say algorithms are the culprits, I mean that they are, in this metaphor, responsible for the perpetuation of slang at this speed, and influencers are being accomplices because we’re playing a part. The algorithm doesn’t do anything by itself; it doesn’t come up with the words or spread the words by itself. It’s humans who are doing that, with our own ideas of what the algorithm is or should be, and that pushes the words faster than otherwise. Eventually, those words enter your vocabulary, and that, I guess, makes you the victim.
How do some words created by social media — such as “skibidi” (nonsensical expression), “delulu” (delusional), and “unalive” (kill or die) — get added to dictionaries?
How they even got there in the first place is the question I’m trying to answer with my book. As I’ve said, trending words are amplified by social media’s algorithms and influencers. As for how some words get added to dictionaries: lexicographers look for sustained usage. If a word is used at a large enough scale that it becomes culturally important, meaning a lot of people know what it is, they’ll add it.
What concerns you about the way social media and its algorithms are changing language?
“The fact that we are using these words is an indicator that this culture is influencing us, and it also indicates that the way ideas spread and percolate in the online space can be dangerous.”
As a linguist, I have no concerns because language is the means by which humans connect with one another. As a cultural critic, I’m pretty concerned by the way in which language is more commodified than ever before, and I’m concerned that certain groups, like incels, are influencing our language more than others. Words that are part of the incel vocabulary like “pilled,” “maxxing,” or “sigma” are very popular. For example, if I like burritos, I can say, “I’m so burrito-pilled,” or if I want to eat more burritos, I can say “I’m burrito-maxxing.” The fact that we are using these words is an indicator that this culture is influencing us, and it also indicates that the way ideas spread and percolate in the online space can be dangerous. Incels are incredibly misogynistic and have a worldview that causes them to dehumanize other people. They have been able to spread their ideology because of the nature of the internet right now. If we pay attention to how language is changing, we should also pay attention to how culture is changing.
As a linguist, I’m very excited to see that language is developing faster than before. To me, language is almost a form of resistance. Every single new meme that emerges is a reactive cultural force to the over-organization of society. This summer, the term “clanker,” which is a speculative slur for artificial intelligence, became very popular. In March, we saw “Italian Brain Rot,” a meme that uses AI subversively to generate ridiculous cartoon characters. Both of these memes create a commentary about our current state of technological progress. A lot of memes and slang words are emerging in reaction to our current cultural moment. There’s something really beautiful about that.
Each year, the U.S. energy industry loses an estimated 3 percent of its natural gas production, valued at $1 billion in revenue, to leaky infrastructure. Escaping invisibly into the air, these methane gas plumes can now be detected, imaged, and measured using a specialized lidar flown on small aircraft.
This lidar is a product of Bridger Photonics, a leading methane-sensing company based in Bozeman, Montana. MIT Lincoln Laboratory developed the lidar's optical-power amplifier, a key component of the system, by advancing its existing slab-coupled optical waveguide amplifier (SCOWA) technology. The methane-detecting lidar is 10 to 50 times more capable than other airborne remote sensors on the market.
"This drone-capable sensor for imaging methane is a great example of Lincoln Laboratory technology at work, matched with an impactful commercial application," says Paul Juodawlkis, who pioneered the SCOWA technology with Jason Plant in the Advanced Technology Division and collaborated with Bridger Photonics to enable its commercial application.
Today, the product is being adopted widely, including by nine of the top 10 natural gas producers in the United States. "Keeping gas in the pipe is good for everyone — it helps companies bring the gas to market, improves safety, and protects the outdoors," says Pete Roos, founder and chief innovation officer at Bridger. "The challenge with methane is that you can't see it. We solved a fundamental problem with Lincoln Laboratory."
A laser source "miracle"
In 2014, the Advanced Research Projects Agency-Energy (ARPA-E) was seeking a cost-effective and precise way to detect methane leaks. Highly flammable and a potent pollutant, methane gas (the primary constituent of natural gas) moves through the country via a vast and intricate pipeline network. Bridger submitted a research proposal in response to ARPA-E's call and was awarded funding to develop a small, sensitive aerial lidar.
Aerial lidar sends laser light down to the ground and measures the light that reflects back to the sensor. Such lidar is often used for producing detailed topography maps. Bridger's idea was to merge topography mapping with gas measurements. Methane absorbs light at the infrared wavelength of 1.65 microns. Operating a laser at that wavelength could allow a lidar to sense the invisible plumes and measure leak rates.
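The measurement principle described here is differential absorption: comparing the laser light returned at a wavelength methane absorbs against a nearby wavelength it does not. The sketch below is only a toy illustration of that Beer-Lambert relationship, not Bridger's actual retrieval algorithm; the function name, the return-power values, and the absorption cross-sections are all hypothetical.

```python
import math

def path_integrated_concentration(p_on, p_off, sigma_on, sigma_off):
    """Toy differential-absorption estimate of path-integrated gas
    concentration (molecules/cm^2) from the ratio of the "on-line"
    return power (at the absorbing wavelength, ~1.65 microns for
    methane) to the "off-line" return power (a nearby non-absorbing
    wavelength). Beer-Lambert: P_on/P_off = exp(-2*CL*(s_on - s_off)),
    where the factor of 2 accounts for the round trip to the ground
    and back to the airborne sensor."""
    return math.log(p_off / p_on) / (2.0 * (sigma_on - sigma_off))

# Hypothetical numbers for illustration only: an 18% dip in the
# on-line return relative to the off-line return.
cl = path_integrated_concentration(p_on=0.82, p_off=1.00,
                                   sigma_on=5e-21, sigma_off=1e-22)
```

Because the off-line return cancels out surface reflectivity and range effects, the ratio isolates the absorption due to the gas along the beam path, which is what lets a single overflight both map terrain and quantify a plume.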
"This laser source was one of the hardest parts to get right. It's a key element," Roos says. His team needed a laser source with specific characteristics to emit powerfully enough at a wavelength of 1.65 microns to work from useful altitudes. Roos recalled the ARPA-E program manager saying they needed a "miracle" to pull it off.
Through mutual connections, Bridger was introduced to a Lincoln Laboratory technology for optically amplifying laser signals: the SCOWA. When Bridger contacted Juodawlkis and Plant, they had been working on SCOWAs for a decade. Although they had never investigated SCOWAs at 1.65 microns, they thought that the fundamental technology could be extended to operate at that wavelength. Lincoln Laboratory received ARPA-E funding to develop 1.65-micron SCOWAs and provide prototype units to Bridger for incorporation into their gas-mapping lidar systems.
"That was the miracle we needed," Roos says.
A legacy in laser innovation
Lincoln Laboratory has long been a leader in semiconductor laser and optical emitter technology. In 1962, the laboratory was among the first to demonstrate the diode laser, which is now the most widespread laser used globally. Several spinout companies, such as Lasertron and TeraDiode, have commercialized innovations stemming from the laboratory's laser research, including those for fiber-optic telecommunications and metal-cutting applications.
In the early 2000s, Juodawlkis, Plant, and others at the laboratory recognized a need for a stable, powerful, and bright single-mode semiconductor optical amplifier, which could enhance lidar and optical communications. They developed the SCOWA (slab-coupled optical waveguide amplifier) concept by extending earlier work on slab-coupled optical waveguide lasers (SCOWLs). The initial SCOWA was funded under the laboratory's internal technology investment portfolio, a pool of R&D funding provided by the undersecretary of defense for research and engineering to seed new technology ideas. These ideas often mature into sponsored programs or lead to commercialized technology.
"Soon, we developed a semiconductor optical amplifier that was 10 times better than anything that had ever been demonstrated before," Plant says. Like other semiconductor optical amplifiers, the SCOWA guides laser light through semiconductor material. This process increases optical power as the laser light interacts with electrons, causing them to shed photons at the same wavelength as the input laser. The SCOWA's unique light-guiding design enables it to reach much higher output powers, creating a powerful and efficient beam. They demonstrated SCOWAs at various wavelengths and applied the technology to projects for the Department of Defense.
When Bridger Photonics reached out to Lincoln Laboratory, the most impactful application of the device yet emerged. Working iteratively through the ARPA-E funding and a Cooperative Research and Development Agreement (CRADA), the team increased Bridger's laser power by more than tenfold. This power boost enabled them to extend the range of the lidar to elevations over 1,000 feet.
"Lincoln Laboratory had the knowledge of what goes on inside the optical amplifier — they could take our input, adjust the recipe, and make a device that worked very well for us," Roos says.
The Gas Mapping Lidar was commercially released in 2019. That same year, the product won an R&D 100 Award, recognizing it as a revolutionary advancement in the marketplace.
A technology transfer takes off
Today, the United States is the world's largest natural gas supplier, driving growth in the methane-sensing market. Bridger Photonics deploys its Gas Mapping Lidar for customers nationwide, attaching the sensor to planes and drones and pinpointing leaks across the entire supply chain, from where gas is extracted to where it is piped across the country and delivered to businesses and homes. Customers buy the data from these scans to efficiently locate and repair leaks in their gas infrastructure. In January 2025, the Environmental Protection Agency provided regulatory approval for the technology.
According to Bruce Niemeyer, president of Chevron's shale and tight operations, the lidar capability has been game-changing: "Our goal is simple — keep methane in the pipe. This technology helps us assure we are doing that … It can find leaks that are 10 times smaller than other commercial providers are capable of spotting."
At the direction of the U.S. government, the laboratory is also seeking industry transfer partners for a technology that couples SCOWA with a photonic integrated circuit platform. Such a platform could advance quantum computing and sensing, among other applications.
"Lincoln Laboratory is a national resource for semiconductor optical emitter technology," Juodawlkis says.
NUS paid tribute to seven outstanding educators, researchers and professionals at the NUS University Awards 2025. The annual Awards recognise members of the NUS community who are trailblazers in their fields and have made sterling contributions to NUS, Singapore, and the global community.
Speaking at the awards ceremony held at the University Cultural Centre on 12 September 2025, NUS President Professor Tan Eng Chye highlighted the significance of this year’s awards, noting that 2025 marks NUS’ 120th anniversary. He thanked all award winners for their contributions and commitment towards building NUS into what it is today.
Acknowledging the central role of AI in shaping the future, Prof Tan emphasised that as a forward-looking university, NUS must not remain on the sidelines while others build capabilities, innovate and ride the AI wave. To this end, NUS is mounting a robust and comprehensive approach to AI that spans its core mission areas of education, research, innovation and enterprise, as well as administration. “We will stay ahead, empower and uplift the whole organisation to fully leverage on AI in all that we do. We are striving forward, with AI and through AI, towards our vision to be a leading global university, shaping the future,” he added.
Top accolade - Outstanding Service Award
This year, the prestigious Outstanding Service Award was conferred on Professor Cheong Koon Hean, Rector of NUS College and Chairman of the Centre for Liveable Cities Advisory Panel under the Ministry of National Development; and Mr Liew Mun Leong, CEO of Gelephu Mindfulness City and Founder and Principal Consultant of Building People Consultancy Pte Ltd., in recognition of their inspiring leadership and dedicated service. Both are accomplished individuals who have made sustained contributions in selflessly serving the University and society.
Professor Cheong Koon Hean
A globally recognised urban planner and architect, Prof Cheong has been instrumental in shaping Singapore’s urban form across multiple decades, from strategic land use and heritage conservation to sustainable public housing and smart‑town innovation.
As former CEO of both the Urban Redevelopment Authority and the Housing & Development Board, she redefined the standards of city-making—spearheading transformative plans for Marina Bay and introducing new generations of public housing marked by inclusivity, biophilia and smart technologies.
As a member of the NUS Board of Trustees for 12 years and now the Rector of NUS College, Prof Cheong bridges academia with public policy through her extensive experience in public service. She has also served on the advisory committees of the former NUS School of Design and Engineering, and NUS Faculty of Engineering.
Currently Chairman of the Centre for Liveable Cities Advisory Panel and the Lee Kuan Yew Centre for Innovative Cities, Prof Cheong continues to influence national and international agendas on urban innovation and sustainability. She also advances global urban sustainability through her many advisory and leadership roles with institutes such as the International Federation for Housing and Planning, the Urban Land Institute, and the World Economic Forum’s Real Estate and Urbanisation Council.
Delivering the citation for Prof Cheong’s conferment, Professor Simon Chesterman, NUS Vice Provost (Educational Innovation) and Dean of NUS College, said, “Professor Cheong is a stateswoman of city planning and a champion of public service. Her leadership exemplifies the integration of academic excellence, policy insight and civic responsibility.”
On her motivation to serve, Prof Cheong shared, “One of the greatest rewards in serving NUS is to see it grow from strength to strength. My wish is for NUS to not only achieve world-class academic excellence, but to also be the catalyst that encourages curiosity, innovation and risk-taking, nurturing young hearts and minds to think beyond grades and self, aspiring to make a better world.”
Watch this inspiring video on Prof Cheong’s dedicated contributions towards the University and society.
Mr Liew Mun Leong
The veteran corporate leader’s five-decade career is marked by many valuable contributions to both public infrastructure and private enterprise. Contributing to Singapore’s journey towards becoming a key transportation hub, Mr Liew played a pivotal role in the corporatisation of Changi Airport. He also provided strategic leadership across major infrastructure projects, including Terminal 4, Jewel Changi Airport and Changi East.
In the private sector over a span of 32 years, he led as CEO and was an active board member of several significant companies, many of which were publicly listed. These include CapitaLand, CapitaMall Trust (the first REIT in Singapore) and the Singapore Exchange. He also chaired the boards of CapitaMalls Asia, Changi Airport Group, Surbana Jurong Consultancy Holding and Pavilion Gas.
Throughout his extensive career, Mr Liew has shared his expertise in engineering, strategic management and infrastructure development on various boards and panels. In 2024, he was appointed by Bhutan’s King Jigme Khesar Namgyel Wangchuck as CEO of Gelephu Mindfulness City, a special administrative region envisioned as a vibrant economic hub within Bhutan.
An NUS alumnus, Mr Liew was Provost’s Chair Professor (Practice) at the NUS Business School, Faculty of Engineering, and the Lee Kuan Yew School of Public Policy. He also provided expertise on various boards within NUS, including NUS Business School's management advisory board and NUS School of Continuing and Lifelong Education’s inaugural industry advisory board, where he fostered collaborations between academia and industry. Mr Liew was also the Rector of Ridge View Residential College, where he provided mentorship to students.
Delivering the citation for Mr Liew’s conferment, Professor Teo Kie Leong, Dean of the College of Design and Engineering at NUS, lauded his visionary leadership and dedicated services to Singapore, as well as NUS, “Mr Liew’s impact is felt both nationally and internationally, spanning the fields of engineering, infrastructure development, business and public service.”
Sharing his aspirations for the University, Mr Liew said, “Having served as Practice Chair Professor (pro-bono) across Business, Public Policy, and Engineering in NUS, I encourage the university to deepen its engagement and foster stronger collaboration between academia and industry. A university’s ultimate measure lies not only in education but in the practical application of knowledge—preparing graduates to innovate, lead and deliver impactful solutions to real-world challenges.”
Watch this inspiring video on Mr Liew’s dedicated contributions towards the University and society.
The honour roll
The NUS University Awards 2025 also recognised the accomplishments of five outstanding educators and researchers.
University Research Recognition Award
Professor Liu Xiaogang, Distinguished Professor in the Department of Chemistry, Faculty of Science, and the Department of Surgery, Yong Loo Lin School of Medicine, was recognised for his groundbreaking research that has placed NUS at the forefront of his area of expertise.
A global leader in nanophotonics and bioimaging, Prof Liu has made transformative contributions to the development of advanced X-ray imaging and optical sensing technologies. His work on all-inorganic perovskite nanoscintillators and high-resolution X-ray luminescence extension imaging has redefined deep-tissue visualisation. This remarkable research has enabled breakthroughs in diagnostics, radiotherapy monitoring and multi-spectral imaging.
Young Researcher Award
Assistant Professor Hou Yi from the Department of Chemical and Biomolecular Engineering, College of Design and Engineering; and Solar Energy Research Institute of Singapore was commended for conducting groundbreaking research with the potential to extend the frontiers of knowledge in his field.
A Presidential Young Professor, Asst Prof Hou has made sterling contributions to the field of renewable energy, particularly in perovskite solar cell technology. He has received global acknowledgement for his work, including his inclusion in the MIT Innovators Under 35 list for the Asia-Pacific region. His recognition as a Clarivate Highly Cited Researcher in the Cross-Field category from 2022 to 2024 further highlights his exceptional impact on his field.
Outstanding Graduate Mentor Award
Professor Yan Ning from the Department of Chemical and Biomolecular Engineering, College of Design and Engineering, was honoured for his excellent and committed mentorship in nurturing the next generation of scholars and thought leaders.
Prof Yan is one of the world’s foremost names in the broad area of catalysis, particularly in the conversion of waste biomass feedstocks into valuable molecules such as amino acids and fuels. Under his guidance, his students have made many impactful research contributions, won prestigious awards at various international scientific competitions, and flourished in diverse and illustrious careers as academics, industry practitioners, and even entrepreneurs.
Outstanding Educator Award
Two faculty members were acknowledged for being exemplary educators who have excelled in engaging and inspiring students in their quest for knowledge:
1) Associate Professor Damith Chatura Rajapakse from the Department of Computer Science, School of Computing
Assoc Prof Rajapakse is a teacher who inspires, a mentor who empowers, and an innovator whose influence is felt far beyond the campus. His teaching evaluations consistently soar above School and Department averages, and his students regularly nominate him for best teaching in every course he conducts. A trailblazer in educational technology, Assoc Prof Rajapakse also created TEAMMATES, a flagship peer-evaluation platform now used by more than a million people across over 1,500 institutions worldwide, which has significantly enhanced collaborative learning.
2) Associate Professor Peter Alan Todd from the Department of Biological Sciences, Faculty of Science; and Tropical Marine Science Institute
An experimental marine ecologist who focuses on organism-environment interactions in nearshore waters, Assoc Prof Todd’s teaching philosophy is “learning by doing science”. He employs problem-based learning to inspire creative thinking and designs authentic assignments that challenge students to use high-level cognitive skills. His courses are designed to encourage knowledge synthesis where students get to produce “ready-for-submission” scientific papers. Over 50 journal papers by undergraduate students as first authors have been published with his guidance.
Read more about the NUS University Awards recipients here and the NUS press release here.
A new MIT initiative known as Day of Design offers free, open-source, hands-on design activities for all classrooms, in addition to professional development opportunities and signature events. The material engages pK-12 learners in the skills they need to solve complex open-ended problems while also considering user, social, and environmental needs. Inspired by Day of AI and Day of Climate, it is a new collaborative effort by the MIT Morningside Academy for Design (MAD) and the WPS Institute, with support from the MIT Museum and the MIT pK-12 Initiative.
“At MIT, design is practiced across departments — from the more obvious ones, like architecture and mechanical engineering, to less apparent ones, like biology and chemistry. Design skills support students in becoming strong collaborators, idea-makers, and human-centered problem-solvers. The Day of Design initiative seeks to share these skills with the K-12 audience through bite-sized, engaging activities for every classroom,” says Rosa Weinberg, who co-led the development of Day of Design and serves as MAD’s K–12 design education lead.
These interdisciplinary resources are designed collaboratively with feedback from teachers and grounded in exciting themes across science, humanities, art, engineering, and other subject areas, serving educators and learners regardless of their experience with design and making. Activities are scaffolded like “grammar lessons” for design education, including classroom-ready slides, handouts, tutorial videos, and facilitation tips supporting 21st century mindsets. All materials will be shared online, enabling educators to use the content as-is, or modify it as needed for their classrooms and other informal learning settings.
Rachel Adams, a former teacher and head of teaching and learning at the WPS Institute, explains, “There can be a gap between open-ended teaching materials and what teachers actually need in their classrooms. Day of Design classroom materials are piloted and workshopped by an interdisciplinary cohort of teachers who make up our Teacher Innovation Fellowship. This collaborative design process allows us to bridge the gap between cutting-edge MIT research and practical student-centered design lessons. These materials represent a new way of thinking that honors both the innovation happening in the labs at MIT and the real-world needs of educators.”
Day of Design also features signature events and a yearly, real-world challenge that brings all the design skills together. It is intended for educators who want ready-to-use design and making activities that connect to their subject areas and mindsets, and for students eager to develop problem-solving skills, creativity, and hands-on experience. Schools and districts looking to engage learners through interdisciplinary, project-based approaches can adopt the program as a flexible framework, while community partners can use it to provide young people with tools and spaces to create.
Cedric Jacobson, a chemistry teacher at Brooke High School in Boston who participated in MAD’s Teacher Innovation Fellowship and contributed to testing the Day of Design curriculum, emphasizes it “provides opportunities for teachers to practice and interact with design principles in concrete ways through multiple lesson structures. This process empowers them to try design principles in model lessons before preparing to use them in their own curriculum.”
Evan Milstein-Greengart, another Teacher Innovation Fellow, describes how “having this hands-on experience changed the way I thought about education. I felt like a kid again — going back to playground learning — and I want to bring that same spirit into my classroom.”
Closing the skills gap through design education
Technologies such as artificial intelligence, robotics, and biotech are reshaping work and society. The World Economic Forum estimates that 39 percent of key job skills will change by 2030. At the same time, research shows student engagement drops sharply in high school, with a third of students experiencing what is often called the “engagement cliff.” Many do not encounter design until college, if at all.
There is a growing need to foster not just technical literacy, but design fluency — the ability to approach complex problems with empathy, creativity, and critical thinking. Design education helps students prototype solutions, iterate based on feedback, and communicate ideas clearly. Studies have shown it can improve creative thinking, motivation, problem-solving, self-efficacy, and academic achievement.
At MIT, design is a way of thinking and creating that spans disciplines — from bioengineering and architecture to mechanical systems and public policy. It is both creative and analytical, grounded in iteration, user input, and systems thinking. Day of Design reflects MIT’s “mens et manus” (“mind and hand”) motto and extends the tools of design to young learners and educators.
“The workshops help students develop skills that can be applied across multiple subject areas, using topics that draw context from MIT research while remaining exciting and accessible to middle and high school students,” explains Weinberg. “For example, ‘Cosmic Comfort,’ one of our pilot workshops, was inspired by MIT's Space Architecture course (MAS.S66/4.154/16.89). It challenges students to consider how you might make a lunar habitat feel like home, while focusing on developing the crucial design skill of ideation — the ability to generate multiple creative solutions.”
Building on an MIT legacy
Day of Design builds on the model of Day of AI and Day of Climate, two ongoing efforts by MIT RAISE and the MIT pK-12 Initiative. All three initiatives share free, open-source activities, professional development materials, and events that connect MIT research with educators and students worldwide. Since 2021, Day of AI has reached more than 42,000 teachers and 1.5 million students in 170 countries and all 50 U.S. states. Day of Climate, launched in March 2025, has already recorded over 50,000 website visitors, 300 downloads of professional development materials, and an April launch event at the MIT Museum that drew 200 participants.
“Day of Design builds on the spirit of Day of AI and Day of Climate by inviting young people to engage with real-world challenges through creative work, meaningful collaboration, and deep empathy for others. These initiatives reflect MIT’s commitment to hands-on, transdisciplinary learning, empowering future young leaders not just to understand the world, but to shape it,” says Claudia Urrea, executive director for the pK–12 Initiative at MIT Open Learning.
Kicking off with connection
“Learning and creating together in person sparks the kind of ideas and connections that are hard to make any other way. Collective learning helps everyone think bigger and more creatively, while building a more deeply connected community that keeps that growth alive,” observes Caitlin Morris, PhD student in Fluid Interfaces, a 2024 MAD Design Fellow, and co-organizer of Day of Design: Connect, which will kick off Day of Design on Sept. 25.
Following the launch, the first set of classroom resources will be introduced during the 2025–26 school year, starting with activities for grades 7–12. Additional resources for younger learners, along with training opportunities for educators, will be added over time. Each year, new design skills and mindsets will be incorporated, creating a growing library of activities. While initial events will take place at MIT, organizers plan to expand programming globally.
Teacher Innovation Fellow Jessica Toupin, who piloted Day of Design activities in her math classroom, reflects on the impact: “As a math teacher, I don’t always get to focus on design. This material reminded me of the joy of learning — and when I brought it into my classroom, students who had struggled came alive. Just the ability to play and build showed me they were capable of so much more.”
“Cosmic Comfort,” one of Day of Design's pilot workshops, was inspired by MIT's Space Architecture course (MAS.S66/4.154/16.89). It was first offered during the 2024 Cambridge Science Festival.
The National University of Singapore (NUS) today recognised seven outstanding individuals who distinguished themselves with their achievements and contributions in the areas of education, research, mentorship and service to the University, Singapore and the global community. They are recipients of the annual NUS University Awards which recognise educators, researchers and professionals for their exceptional contributions to the University, Singapore and the global community.
This year, the prestigious Outstanding Service Award was conferred on Professor Cheong Koon Hean, Rector of NUS College and Chairman of the Centre for Liveable Cities Advisory Panel, Ministry of National Development; and Mr Liew Mun Leong, CEO of Gelephu Mindfulness City and Founder and Principal Consultant of Building People Consultancy Pte Ltd., in recognition of their inspiring leadership and dedicated service. Both are accomplished individuals who have made sustained contributions in selflessly serving the University and society.
NUS President Professor Tan Eng Chye said, “This year’s University Awards recipients have won their award in a special year. 2025 is a significant year for NUS as we celebrate our 120th anniversary. We are proud of the profound dedication of our winners to pursuing excellence in education, research and service at the highest levels, raising the standards and reputation of NUS as a leading global university, shaping the future. The impact of their work is far-reaching, and their contributions have left an indelible mark that inspires us to scale greater heights. I extend my heartiest congratulations to all award recipients.”
Besides the Outstanding Service Award, the other award categories are University Research Recognition Award, Young Researcher Award, Outstanding Graduate Mentor Award and Outstanding Educator Award.
Outstanding Service Award
Professor Cheong Koon Hean
A globally recognised urban planner and architect, Professor Cheong Koon Hean is instrumental in shaping Singapore’s urban form across multiple decades, from strategic land use and heritage conservation to sustainable public housing and smart‑town innovation.
As former CEO of both the Urban Redevelopment Authority and the Housing & Development Board, she redefined the standards of city-making—spearheading transformative plans for Marina Bay and introducing new generations of public housing marked by inclusivity, biophilia and smart technologies.
As a member of the NUS Board of Trustees for 12 years and now the Rector of NUS College, Professor Cheong bridges academia with public policy through her extensive experience in public service. She has also served on the advisory committees of the former NUS School of Design and Engineering, and NUS Faculty of Engineering.
Currently Chairman of the Centre for Liveable Cities Advisory Panel, Ministry of National Development and the Lee Kuan Yew Centre for Innovative Cities, Professor Cheong continues to influence national and international agendas on urban innovation and sustainability. She also advances global urban sustainability through her many advisory and leadership roles with institutes such as the International Federation for Housing and Planning, the Urban Land Institute, and the World Economic Forum’s Real Estate and Urbanisation Council.
Mr Liew Mun Leong
Mr Liew Mun Leong’s five-decade career as a veteran corporate leader is marked by many valuable contributions to both public infrastructure and private enterprise. Contributing to Singapore’s journey towards becoming a key transportation hub, he played a pivotal role in the corporatisation of Changi Airport. He also provided strategic leadership across major infrastructure projects, including Terminal 4, Jewel Changi Airport and Changi East.
In the private sector, over a span of 32 years, he led as CEO and served as an active board member of several significant companies, many of them publicly listed. These include CapitaLand, CapitaMall Trust (the first REIT in Singapore) and the Singapore Exchange. He also chaired the boards of CapitaMalls Asia, Changi Airport Group, Surbana Jurong Consultancy Holding and Pavilion Gas.
Throughout his extensive career, Mr Liew has shared his expertise in engineering, strategic management and infrastructure development on various boards and panels. In 2024, he was appointed by Bhutan’s King Jigme Khesar Namgyel Wangchuck as CEO of Gelephu Mindfulness City, a special administrative region envisioned as a vibrant economic hub within Bhutan.
An NUS alumnus, Mr Liew was Provost’s Chair Professor (Practice) at the NUS Business School, Faculty of Engineering, and the Lee Kuan Yew School of Public Policy. He also provided expertise on various boards within NUS, including NUS Business School's management advisory board and NUS School of Continuing and Lifelong Education’s inaugural industry advisory board, where he fostered collaborations between academia and industry. Mr Liew was also the Rector of Ridge View Residential College, where he provided mentorship to students.
Five exemplary educators and researchers honoured
The NUS University Awards also celebrated the accomplishments of five outstanding academics and researchers:
University Research Recognition Award
1) Professor Liu Xiaogang, Distinguished Professor in the Department of Chemistry, Faculty of Science; and the Department of Surgery, Yong Loo Lin School of Medicine
A global leader in nanophotonics and bioimaging, Prof Liu has made transformative contributions to the development of advanced X-ray imaging and optical sensing technologies. His work on all-inorganic perovskite nanoscintillators and high-resolution X-ray luminescence extension imaging has redefined deep-tissue visualisation. This remarkable research has enabled breakthroughs in diagnostics, radiotherapy monitoring and multi-spectral imaging.
Young Researcher Award
2) Assistant Professor Hou Yi, Presidential Young Professor in the Department of Chemical and Biomolecular Engineering, College of Design and Engineering; and Solar Energy Research Institute of Singapore
Asst Prof Hou has made groundbreaking contributions to the field of renewable energy, particularly in perovskite solar cell technology. He has received global acknowledgement for his work, including his inclusion in the MIT Innovators Under 35 list for the Asia-Pacific region. His recognition as a Clarivate Highly Cited Researcher in the Cross-Field category from 2022 to 2024 further highlights his exceptional impact on his field.
Outstanding Graduate Mentor Award
3) Professor Yan Ning from the Department of Chemical and Biomolecular Engineering, College of Design and Engineering
Prof Yan is one of the world’s foremost names in the broad area of catalysis, particularly in the conversion of waste biomass feedstocks into valuable molecules such as amino acids and fuels. Under his guidance, his students have made many impactful research contributions, won prestigious awards at various international scientific competitions, and flourished in diverse and illustrious careers as academics, industry practitioners, and even entrepreneurs.
Outstanding Educator Award
4) Associate Professor Damith Chatura Rajapakse from the Department of Computer Science, School of Computing
Assoc Prof Rajapakse is a teacher who inspires, a mentor who empowers, and an innovator whose influence is felt far beyond the campus. His teaching evaluations consistently soar above School and Department averages, and his students regularly nominate him for best teaching in every course he conducts. A trailblazer in educational technology, Assoc Prof Rajapakse also created TEAMMATES, a flagship peer-evaluation platform now used by over a million people across more than 1,500 institutions worldwide, which has significantly enhanced collaborative learning.
5) Associate Professor Peter Alan Todd from the Department of Biological Sciences, Faculty of Science; and Tropical Marine Science Institute
An experimental marine ecologist who focuses on organism-environment interactions in nearshore waters, Assoc Prof Todd’s teaching philosophy is “learning by doing science”. He employs problem-based learning to inspire creative thinking, and designs authentic assignments that challenge students to use high-level cognitive skills. His courses are designed to encourage knowledge synthesis where students get to produce “ready-for-submission” scientific papers. Over 50 journal papers by undergraduate students as first authors have been published with his guidance.
A new innovation from Cornell researchers lowers the energy use needed to power artificial intelligence – a step toward shrinking the carbon footprints of data centers and AI infrastructure.
The University of Cambridge is proud to support the Award, recognised as one of the UK’s most significant literary prizes for a single short story. The prize aims to expand opportunities for British writers, readers and publishers of the short form, and to honour the country’s finest exponents of the genre. Cambridge staff, students and researchers contribute to the partnership, which also offers unique professional development opportunities for PhD students through a BBC shadowing scheme.
The 2025 shortlist
This year’s shortlist has been praised for its ‘intimate,’ ‘elegant’ and ‘nuanced’ explorations of relationships, community and the specificities of place:
‘Yair’ by Emily Abdeni-Holman
‘You Cannot Thread a Moving Needle’ by Colwill Brown
‘Little Green Man’ by Edward Hogan
‘Two Hands’ by Caoilinn Hughes
‘Rain, a History’ by Andrew Miller
Set in locations from Derbyshire and Doncaster to Jerusalem and County Kildare, the stories explore ‘self-contained’ worlds often inspired by personal memories and experiences: from the complexities of marriage to the mysteries of survival in crisis, and from newly formed inter-generational bonds to the quiet tension between people and place. Each reveals the short story’s ‘unparalleled’ power to reflect ‘the times we are living through.’
The five shortlisted stories will be broadcast on BBC Radio 4 from 15 - 19 September and made available on BBC Sounds. They will also appear in an anthology published by Comma Press.
The winner will receive £15,000, with £600 awarded to each of the other shortlisted writers. The announcement will be made live on Front Row on Tuesday 30 September 2025.
A BBC and Cambridge partnership
Cambridge's long-term partnership with both the BBC National Short Story Award and the BBC Young Writers’ Award is led by Dr Bonnie Lander Johnson (Fellow and Associate Professor in English at Downing and Newnham Colleges) and Dr Elizabeth Rawlinson-Mills (University Associate Professor in the Faculty of Education and Fellow of Robinson College).
Dr Lander Johnson said:
“The National Short Story Awards continue to be the largest and most prestigious awards of their kind in the UK. I am proud to represent the University on this partnership; I believe we have a role to play in supporting the production of literary excellence in Britain. Storytelling is an essential human impulse through which we reflect on our changing world, inspire younger generations, and make sense of our collective and individual lives. It is essential that Cambridge University remains part of such crucial cultural work. Who are we if we cannot tell our stories?”
Dr Rawlinson-Mills added:
“The short story as a form is intense. Compact, powerful, challenging – for the writer and, often, for the reader. Each year the National Short Story Award brings us into contact with some of the most exciting voices in English writing, and over the past twenty years it’s been a privilege to see the ways in which winning this prize has boosted writers’ profiles and brought their work to new audiences through the broadcasts on R4. Every year there are new reasons to feel that we need stories more than ever. I am very proud of the part the University of Cambridge continues to play in supporting the prize and therefore supporting new writing.”
Cambridge PhD students are also benefitting from the BBC Partnership Shadowing Scheme, which allows arts and social sciences researchers at Cambridge to work with BBC teams on programming around the Awards, developing valuable skills in cultural engagement and public communication.
About the Award
First presented in 2006, the BBC National Short Story Award has honoured leading and emerging voices including Sarah Hall, Cynan Jones, Ingrid Persaud, and Saba Sams. Alumni of the shortlist include Zadie Smith, Hilary Mantel, Tessa Hadley and Caleb Azumah Nelson.
The 2025 judging panel is chaired by Di Speirs MBE, joined by William Boyd, Lucy Caldwell, Ross Raisin and Kamila Shamsie.
The BBC Young Writers’ Award with Cambridge University, now in its 11th year, also continues to inspire writers aged 14 - 18. The shortlist will be announced on Sunday 14 September, with the winner also revealed on 30 September.
The shortlist for the 2025 BBC National Short Story Award with Cambridge University was announced last night, Thursday 11 September, on BBC Radio 4’s Front Row, as the prestigious prize celebrates its 20th anniversary.
ETH Professor Sonia Seneviratne is the first Swiss citizen to receive the prestigious German Environmental Award bestowed by the German Federal Environmental Foundation. The climate researcher shares the prize, endowed with a total of 500,000 euros, with a company from Gelsenkirchen.
Despite changes to the HM Treasury Green Book to encourage forms of valuation other than economic, local authorities are struggling to capture social, environmental and cultural value in a way that feeds into their systems and processes. The Public Map Platform project aims to make this easy by spatialising data so that it can be used as a basis for targeted hyperlocal action for a green transition.
Professor Flora Samuel said: “Climate change cannot be addressed without revealing and tackling the inequalities within society and where they are happening. Only when we know what is happening where, and how people are adapting to climate change can we make well informed decisions.”
“The aim of this pragmatic project is to create a Public Map Platform that will bring together multiple layers of spatial information to give a social, environmental, cultural and economic picture of what is happening in a neighbourhood, area, local authority, region or nation.”
In 2023, the project was awarded one of four new £4.625 million Green Transition Ecosystem grants. The second phase of funding will enable the project to build on its impacts and benefits.
Flora Samuel’s team is presenting to the Welsh Government at the Senedd in Cardiff on 30 September 2025. They have engaged with hundreds of children on the Isle of Anglesey and will be bringing the Public Map Platform to Cambridge, working with the team in The Cambridge Room.
Green Transition Ecosystems (GTEs) are large-scale projects that focus on translating the best design-led research into real-world benefits. Capitalising on clusters of design excellence, GTEs address distinct challenges posed by the climate crisis including, but not limited to, realising net zero goals.
The Public Map Platform addresses the following overarching aims of the Green Transition Ecosystems call: measurable, green transition-supportive behavioural change across sectors and publics; design that fosters positive behavioural change in support of green transition goals, including strategy and policy; region-focused solutions, for example the infrastructure supporting rural communities; and, lastly, designing for diversity.
To meet these aims, the project will deliver a baseline model mapping platform for decision-making with communities, for use by Local Authorities (LoAs) across the UK and beyond. To do this, a pilot platform will be made for the Isle of Anglesey to help the LoA measure its progress towards a green transition and fulfilment of the Future Generations Wales Act in a transparent and inclusive way.
The Isle of Anglesey/Ynys Môn in North Wales was chosen as the case study for this project largely because it is a discrete geographical place that is rural, disconnected and in decline, with a local authority that has high ambitions to reinvent itself as a centre of sustainable innovation, to be an 'Energy Island’ at the centre of low-carbon energy research and development. The bilingual context of Anglesey provides a particular opportunity to explore issues around multilingual engagement, inclusion and culture – a UK-wide challenge.
The project, a collaboration with the Wales Institute of Social and Economic Research and Data (WISERD) at Cardiff University, Wrexham Glyndwr University and several other partners, is supported by the Welsh Government and the Future Generations Commissioner for Wales, who are investigating ways to measure, and spatialise, attainment against the Well-being of Future Generations (Wales) Act (2015), a world-leading piece of sustainability legislation.
The Public Map Platform will offer a range of well-designed and accessible information to communities, local authorities and policy makers alike, as well as opportunities to contribute to the maps. The map layers will constantly grow with information and sophistication, reconfigured according to local policy and boundaries. And crucially, they will be developed and monitored with and by a representative cross section of the local community.
An accessible website will be designed as a data repository tailored to a range of audiences, scalable for use across the UK. Social, cultural and environmental map layers will be co-created with children and young people to show, for instance, where people connect, engage with cultural activities and do small things to adapt to climate change.
The community-made data will be overlaid onto existing census and administrative data sets to build a baseline Future Generations map of the Isle of Anglesey. The layers can be clustered together to measure the island’s progress against the Act but can also be reconfigured to other kinds of measurement schema. In this way the project will offer a model for inclusive, transparent and evidence-based planning, offering lessons for the rest of the UK and beyond.
This award is part of the Future Observatory: Design the Green Transition programme, the largest publicly funded design research and innovation programme in the UK. Funded by AHRC in partnership with Future Observatory at the Design Museum, this £25m multimodal investment aims to bring design researchers, universities, and businesses together to catalyse the transition to net zero and a green economy.
Christopher Smith, Executive Chair of the Arts and Humanities Research Council said:
“Design is a critical bridge between research and innovation. Placing the individual act of production or consumption within the context of a wider system of social and economic behaviour is critical to productivity, development and sustainability.
"That’s why design is the essential tool for us to confront and chart a path through our current global and local predicaments, and that’s why AHRC has placed design at the heart of its strategy for collaboration within UKRI.
"From health systems to energy efficiency to sustainability, these four Green Transition Ecosystem projects across the UK are at the cutting edge of design, offering models for problem solving, and will touch on lives right across the UK.”
A team led by Professor Flora Samuel from Cambridge’s Department of Architecture has been awarded a further Green Transition Ecosystem grant of £3.12 million by the Arts and Humanities Research Council (AHRC) to create a Public Map Platform to chart the green transition on the Isle of Anglesey/Ynys Môn.
The Public Map Platform’s outdoor engagement activities, Lle Llais, on Anglesey
The garden of the Kunsthaus Zurich now features an unusual sight: Zardoz, an eight-metre-high head sculpture that you can not only view but also enter and climb.
As part of NUS Global Relations Office’s Study Trips for Engagement and EnRichment (STEER) programme, 15 NUS students from the College of Alice & Peter Tan (CAPT) embarked on a 10-day overseas trip to multiple municipalities and districts in Timor-Leste, including Dili, Ermera and Metinaro. Led by Dr Toh Tai Chong, Director of Residential Life and Resident Fellow at CAPT, the trip centred on the themes of youth and community development. Timor-Leste, the youngest Southeast Asian country, independent since 2002, offered students the chance to gain deep insights into the country’s agricultural, educational and community leadership systems through engagement with various stakeholders, communities and organisations.
Agriculture as the backbone of Timor-Leste’s economy
Coffee is one of Timor-Leste’s top export crops, deeply rooted in the country’s history due to Portuguese colonisation. Today it remains a vital part of the nation’s economy and identity, and the team had the chance to visit various coffee production sites and local cafes, experiencing firsthand the full harvesting process.
Alongside coffee production, the Timorese government is positioning agriculture as one of the country’s key export sectors by tapping the country’s land and natural potential for agricultural production. To support this push, universities like Universidade Nacional Timor Lorosa'e and the University of Peace are training youth in agriculture.
Through conversations with university students and professors, the CAPT team discovered the importance Timorese youths placed on agricultural education as a pathway to improve their country’s farming industry. Many have aspirations to work abroad, particularly in places like Australia, in order to bring back new agricultural techniques to benefit their communities.
However, farming remains challenging: the country’s long dry seasons last up to nine months, and growing concerns about climate change, resource depletion and pollution make environmental conservation even more critical. Non-governmental organisations (NGOs) like Permatil hope to address this by equipping young people with knowledge about permaculture and water restoration, empowering them to implement sustainable agricultural practices in their own communities.
Education as a means to engage and empower youths
To learn more about the youths and the local education system, the team visited educational institutions ranging from pre-schools to universities. These visits deepened the insights they had gained from pre-trip seminars conducted by professionals who shared their personal and professional experiences of working in Timor-Leste. One notable takeaway was how the importance of agriculture was actively communicated to young people. Many universities offer agricultural courses aimed at equipping the next generation with the skills needed to sustain and advance an industry so vital to the country’s development.
On a more personal level, the team was deeply moved by conversations with students, teachers and staff, many of whom shared that they aspired to become educators in order to give back to society. Clarence, a Year Three CAPTain from the Faculty of Science, said, “It is heartening to see the passion of these students and staff — not just in educating us, but in their eagerness to connect. Despite the challenges they face, it was truly inspiring to witness their determination and intentionality they display in creating meaning and value for themselves and their communities.”
Uniting communities through community leadership
Another key theme which surfaced during the trip was the quiet strength of community leadership. Many of the community leaders the students met shared their desire to give back and empower fellow Timorese by providing them with jobs, training and skills to uplift their families. Driven by an entrepreneurial spirit and a deep commitment to their communities, these leaders have creatively leveraged existing resources to advocate for a range of causes – from sustainability and women’s rights to coffee production and local spirits – while carefully balancing support from NGOs with their pursuit of self-reliance. When visiting one of these NGOs, the Alola Foundation, Year 2 CAPTain and Business student Lynette Saw (pictured in the left photo below) was inspired by the stories shared during the visit, and by how a collective vision can spark meaningful, intergenerational change for women in Timor-Leste.
Given the constraints faced by the local government, the students saw how community leadership plays an important role in Timor-Leste’s development, and how local initiatives and grassroots leaders complement the government’s work by helping to drive social progress and enhance community resilience. Tan When Young, a Year Three CAPTain from the College of Design and Engineering, not only held deep respect for the community leaders but also developed a newfound appreciation for the meaningful work that the NGOs do. “Many of the NGOs truly went above and beyond in their work to support the local communities. While it is definitely not easy and there are a lot of challenges, it is heartening to witness the fruits of their labour.”
The trip left the team cognisant of the challenges this young nation faces, yet deeply hopeful about its untapped potential. With the right support systems and sustained investment in the local communities, they believe that Timor-Leste has the potential to flourish – grounded in unity and driven by a shared vision and strong sense of national identity.
Photo illustration by Liz Zonarich/Harvard Staff; Barbara Westman illustrations courtesy of Fritz Westman
Sarah Lamodi
Harvard Correspondent
Before New Yorker covers, Barbara Westman created colorful visions of campus as Gazette’s first staff artist
On Sept. 15, 1978 — back when The Harvard Gazette still had a print edition — it was distributed as usual around campus and to the mailboxes of subscribers across the country. But something about this issue was different. In a first, the front page was illustrated.
Widener Library’s steps are immediately recognizable in the illustration, but to local readers at the time the exuberant style of the drawing would have been just as familiar. It was that of Barbara Westman, an artist with close ties to Harvard who, throughout the 1960s and ’70s, had illustrated four books about Boston and Cambridge. She became the first staff artist for the Gazette in 1977, illustrating full pages for holidays and anniversaries, half pages for charity drives and campus notices, and dozens of spot drawings depicting Harvard Yard and Harvard Square. After moving to the Big Apple in 1980 she began working for The New Yorker, ultimately drawing 17 covers and more than 100 spot illustrations over 13 years with the publication.
“Barbara was the kind of person people gravitated to,” said Fritz Westman about his aunt, who died last year at age 95. He recalls her as a stylish local celebrity in a raccoon coat and school hat zooming around in a little red Volkswagen. “She was not the conservative Bostonian. She had a method for being herself, like exclamation marks in large bubble letters. You could tell there was something about her that was just different and fun.”
Westman’s drawings for the Gazette were recently rediscovered as part of an ongoing project digitizing the publication’s physical issues. A selection of her Gazette works are published here online for the first time.
Illustration of Harvard Square in the rain, 1979.
Before joining the Gazette, Westman had worked at Harvard from 1967 to 1977 as an archaeological draftsman for the Peabody Museum. But her connection to Harvard goes back further, said her nephew.
She was born in 1929 in Boston to Frederick W. Westman, an architect, and Eleanor Proctor Furminger, a concert pianist. Buildings by her father’s firm, Whelan & Westman, can be found around Boston and Cambridge — including in Harvard Yard. The firm was employed as a subcontractor on parts of Dunster House and Lowell House, both built in 1930. She started drawing at age 2, said her nephew, and her parents’ creativity was foundational to her becoming an artist, as was a love of comic strips like “Little Orphan Annie,” “Joe Palooka,” and “Moon Mullins.” After attending Goucher College in Maryland and completing her postgraduate art studies in Munich, Westman returned to Boston in 1957 to attend the School of the Museum of Fine Arts, where she was first in her class.
Local landmarks including City Hall Plaza, Commonwealth Avenue, and the Longfellow Bridge fill the pages of Westman’s first book, “The Bean and the Scene: Drawings of Boston,” published in 1969. Her husband, the late philosopher and art critic Arthur Danto, recalled the local impact of her work in an essay introducing one of her art shows: “Posters, taken from the books, were displayed in shop windows all over Cambridge and Boston, and everyone owned copies of the books.” Westman would publish six more illustrated books between 1970 and 1991, three depicting Greater Boston.
Illustration of Morgan Gate and Widener Library entrance, 1979.
Steps of Widener Library Illustration, 1978.
Illustration of Harvard Square MBTA construction, 1979.
Illustration of students in library for midyear exams, 1979.
Illustration of tourists and the John Harvard statue, 1979.
Westman (and her pet parakeet) lived in an apartment near Harvard Square while she worked for the University. Her nephew Fritz Westman would frequently visit as a child, and they would wander the Square, browsing shops like Design Research on Brattle Street, pointing out what they found most interesting. When Westman visited her nephew’s family in Rockport, Massachusetts, she’d invite him to “edit” her books. Fritz Westman said it was an excuse to spend time with him — he was then around 10 years old — and a way to see how a child reacted to her work. “It was just like having a friend who was my same age. I looked up to her; it was like walking around with a cartoon character.”
Harvard also figures prominently in Westman’s books. A 1970 Harvard Crimson review of “The Beard and the Braid: Drawings of Cambridge” observes, “When Barbara Westman says ‘Cambridge’ she is really talking about Harvard,” later adding, “Miss Westman may live in Cambridge, but she looks at it through the eyes of a tourist.” The reviewer was on to something: In a letter to a friend after living in Europe for four years, Westman wrote, “Now I look at America as a tourist sees America. I see EVERYTHING!”
Many considered the way Westman viewed the world to be one of the most compelling aspects of her art. Danto once wrote: “Barbara had created a visual myth of Boston. The world she gave was her own. In truth, her drawings were her mind, given an external embodiment.”
Barbara and her husband.
Photo by Anne Hall Elser
In 1980, Westman moved to New York City to marry Danto. Despite the change of scenery, her process remained the same. While living in Europe, Barbara drew perched on rooftops. In Boston, her nephew remembers her sitting in a snowbank in the Beacon Hill neighborhood, meticulously recording every brick of a building. And in New York, she’d people-watch on the bus or on walks along Broadway. The city was her studio space. Once she captured what she saw in notes and drawings, Westman wrote in a letter to a friend, it was then time to go home to her “second studio, where it is quiet, and think.”
In an artist’s statement from the early 2000s, Westman wrote, “I like to put the image down right away — very directly — and not change it.” For this reason, she preferred paper, ink, and acrylic over canvas.
“There were a lot of people in the contemporary art world who didn’t view her work as legitimate because it was mostly on paper,” said Fritz Westman, also an artist. And yet, “She was not somebody who was interested in trying to prove herself to the art world. When you’re making work that you love, you really don’t care about what anybody else is interested in. You just go and do your own thing.”
Barbara and her nephew, Fritz Westman.
Photo courtesy of Fritz Westman
Westman’s work can be found in public and private collections in the U.S. and abroad, including the Harvard Art Museums; the Philadelphia Museum of Art; the Museum of Fine Arts, Boston; the Boston Athenaeum; Kjarvalsstaðir in Reykjavik, Iceland; and the Galerie Mantoux-Gignac in Paris. But her legacy survives not only in exhibits and the memories of loved ones, but on the very streetscapes she captured.
In the Leavitt & Peirce tobacco shop on Massachusetts Avenue in Cambridge, a dusty copy of “The Beard and the Braid” sits on a shelf behind the register, clipped open to the page with an illustration of their storefront. And in Hillside Cleaners on nearby Brattle, an illustration of Brattle Street that Westman gifted to the store’s original owner in the late ’60s hangs framed above the front desk. Maureen German, a longtime employee at the cleaners, points out a street sign in the illustration that reads “NO NOT HERE.”
“To park in Harvard Square was a pain in the behind,” said German. “In all her paintings, when you look at them, there’s something crazy in it like that. She had quite the sense of humor.”
Westman passed that creative instinct, drive, and humor to her nephew. His early days surrounded by creatives in Rockport, and the support of his aunt and uncle later in life, gave him the push he needed to leave his undergraduate business program for museum school. Since the ’80s, he’s been a sculptor and collaborator on art restorations. Now, he hopes to share with family and friends what his aunt shared with him, and it starts at his Pennsylvania home.
“My home is like a little self-portrait. There’s Barbara’s works, there’s my works, there’s my grandfather’s things,” but, most importantly, he said, it’s becoming “a cousin to Barbara’s apartment in New York City” — a space for laughter, creativity, and the everyday sights and sounds that inspired her.
“In our very busy lives,” Barbara wrote in “The Beard and the Braid,” “we don’t stop and stare at or wonder at some really beautiful things. Artists do. Children do. I guess it’s up to artists and photographers and children to help busy people see what’s around them.”
Researchers at the Antimicrobial Resistance (AMR) interdisciplinary research group of the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, have developed a powerful tool capable of scanning thousands of biological samples to detect transfer ribonucleic acid (tRNA) modifications — tiny chemical changes to RNA molecules that help control how cells grow, adapt to stress, and respond to diseases such as cancer and antibiotic‑resistant infections. The tool opens up new possibilities for science, health care, and industry, from accelerating disease research and enabling more precise diagnostics to guiding the development of more effective medical treatments.
For this study, the SMART AMR team worked in collaboration with researchers at MIT, Nanyang Technological University in Singapore, the University of Florida, the University at Albany in New York, and Lodz University of Technology in Poland.
Addressing current limitations in RNA modification profiling
Cancer and infectious diseases are complicated health conditions in which cells are forced to function abnormally by mutations in their genetic material or by instructions from an invading microorganism. The SMART-led research team is among the world’s leaders in understanding how the epitranscriptome — the over 170 different chemical modifications of all forms of RNA — controls growth of normal cells and how cells respond to stressful changes in the environment, such as loss of nutrients or exposure to toxic chemicals. The researchers are also studying how this system is corrupted in cancer or exploited by viruses, bacteria, and parasites in infectious diseases.
Current molecular methods used to study the expansive epitranscriptome and all of the thousands of different types of modified RNA are often slow, labor-intensive, costly, and involve hazardous chemicals, which limits research capacity and speed.
To solve this problem, the SMART team developed a new tool that enables fast, automated profiling of tRNA modifications — molecular changes that regulate how cells survive, adapt to stress, and respond to disease. This capability allows scientists to map cell regulatory networks, discover novel enzymes, and link molecular patterns to disease mechanisms, paving the way for better drug discovery and development, and more accurate disease diagnostics.
Unlocking the complexity of RNA modifications
SMART’s open-access research, recently published in Nucleic Acids Research and titled “tRNA modification profiling reveals epitranscriptome regulatory networks in Pseudomonas aeruginosa,” shows that the tool has already enabled the discovery of previously unknown RNA-modifying enzymes and the mapping of complex gene regulatory networks. These networks are crucial for cellular adaptation to stress and disease, providing important insights into how RNA modifications control bacterial survival mechanisms.
Using robotic liquid handlers, researchers extracted tRNA from more than 5,700 genetically modified strains of Pseudomonas aeruginosa, a bacterium that causes infections such as pneumonia, urinary tract infections, bloodstream infections, and wound infections. Samples were enzymatically digested and analyzed by liquid chromatography-tandem mass spectrometry (LC-MS/MS), a technique that separates molecules based on their physical properties and identifies them with high precision and sensitivity.
As part of the study, the process generated over 200,000 data points in a high-resolution approach that revealed new tRNA-modifying enzymes and simplified gene networks controlling how cells respond and adapt to stress. For example, the data revealed that the methylthiotransferase MiaB, one of the enzymes responsible for the tRNA modification ms2i6A, is sensitive to the availability of iron and sulfur and to metabolic changes when oxygen is low. Discoveries like this highlight how cells respond to environmental stresses, and could lead to the future development of therapies or diagnostics.
SMART’s automated system was specially designed to profile tRNA modifications across thousands of samples rapidly and safely. Unlike traditional methods, this tool integrates robotics to automate sample preparation and analysis, eliminating the need for hazardous chemical handling and reducing costs. This advancement increases safety, throughput, and affordability, enabling routine large-scale use in research and clinical labs.
A faster and automated way to study RNA
As the first system capable of quantitative, system-wide profiling of tRNA modifications at this scale, the tool provides a unique and comprehensive view of the epitranscriptome — the complete set of RNA chemical modifications within cells. This capability allows researchers to validate hypotheses about RNA modifications, uncover novel biology, and identify promising molecular targets for developing new therapies.
“This pioneering tool marks a transformative advance in decoding the complex language of RNA modifications that regulate cellular responses,” says Professor Peter Dedon, co-lead principal investigator at SMART AMR, professor of biological engineering at MIT, and corresponding author of the paper. “Leveraging AMR’s expertise in mass spectrometry and RNA epitranscriptomics, our research uncovers new methods to detect complex gene networks critical for understanding and treating cancer, as well as antibiotic-resistant infections. By enabling rapid, large-scale analysis, the tool accelerates both fundamental scientific discovery and the development of targeted diagnostics and therapies that will address urgent global health challenges.”
Accelerating research, industry, and health-care applications
This versatile tool has broad applications across scientific research, industry, and health care. It enables large-scale studies of gene regulation, RNA biology, and cellular responses to environmental and therapeutic challenges. The pharmaceutical and biotech industry can harness it for drug discovery and biomarker screening, efficiently evaluating how potential drugs affect RNA modifications and cellular behavior. This aids the development of targeted therapies and personalized medical treatments.
“This is the first tool that can rapidly and quantitatively profile RNA modifications across thousands of samples,” says Jingjing Sun, research scientist at SMART AMR and first author of the paper. “It has not only allowed us to discover new RNA-modifying enzymes and gene networks, but also opens the door to identifying biomarkers and therapeutic targets for diseases such as cancer and antibiotic-resistant infections. For the first time, large-scale epitranscriptomic analysis is practical and accessible.”
Looking ahead: advancing clinical and pharmaceutical applications
SMART AMR plans to expand the tool’s capabilities to analyze RNA modifications in human cells and tissues, moving beyond microbial models to deepen understanding of disease mechanisms in humans. Future efforts will focus on integrating the platform into clinical research to accelerate the discovery of biomarkers and therapeutic targets. Translating the technology into an epitranscriptome-wide analysis tool for pharmaceutical and health-care settings will drive the development of more effective and personalized treatments.
The research conducted at SMART is supported by the National Research Foundation Singapore under its Campus for Research Excellence and Technological Enterprise program.
Harvard has played a major role in shaping Greater Boston into a global biomedical hub, with overlapping schools, hospitals, and firms that offer state-of-the-art facilities and lifesaving medical research.
But for decades, many of the same institutions have also dispatched hundreds of doctors, researchers, and students to the poorest parts of the planet — where what would be routine care in Longwood often becomes a coin flip.
Even with hundreds of grants terminated or on hold, these efforts continue. But the work sits uneasily in a moment when U.S.-sponsored foreign aid has fallen into disfavor.
As faculty director of the Harvard Global Health Institute, Louise Ivers has been thinking hard about how to defend some of the University’s most altruistic work in a climate of skepticism.
That kind of rhetoric doesn’t come naturally to Ivers. “I don’t work in global health because I think it’s good for Americans, or because of the ‘return on investment’ — not in those terms at all,” she said. “I do it because I believe in a shared humanity, and in global solidarity, and because I think it is our responsibility.”
Nevertheless, she said, there is an argument to be made — one she links to the late Paul Farmer, her friend and mentor and a legend at Harvard and in global health.
Starting around 2003, Ivers forged a bond with Farmer in central Haiti, at clinics run by Partners In Health, the organization he co-founded. There, Ivers — an infectious-disease specialist born and educated in Ireland — contended with waves of HIV and tuberculosis even as she provided the day-to-day care demanded by malnutrition and extreme poverty.
“Paul thought Harvard’s physicians, who work in the best-resourced institutes in the country, should also work in the most under-resourced communities in the world,” she said. “Wrestling between those two experiences can improve both of them.”
The nation’s COVID response is a case in point.
In the early months of the pandemic, Massachusetts turned to Partners In Health to provide contact tracing, based on “what the organization had learned working in places like Haiti, with the cholera outbreak, or illnesses like Ebola in West Africa,” Ivers said.
And doctors who have worked in bare-bones clinics across the Global South learn to do more with less — as they had to do in 2020, when ventilators, masks, and diagnostic tests were in short supply worldwide. The work leaves them humbled.
“In Haiti, I learned how necessary it was not to try to force things on the community that they didn’t help come up with,” Ivers said. She learned Haitian Creole, visited patients’ homes, and gave comfort to families grieving children lost to malnutrition or disease.
So as American healthcare becomes more inclusive, she said, we all owe a debt to Haiti.
Today Ivers occupies a multifaceted role like the one Farmer held before his death in 2022: She is a clinician and researcher in Haiti and Africa, a central node in a network of Harvard-affiliated doctors and their partners around the world, and an evangelist for their work at home.
Each month, HGHI hosts virtual conversations that aim to clarify what global health is — and isn’t. For instance, they note, foreign aid accounts for about 1 percent of U.S. annual spending, not the 20 or 30 percent people sometimes estimate it to be.
Even amid attacks on American health programs such as USAID and PEPFAR, global health remains the most popular secondary field among Harvard undergrads, Ivers said.
And many medical students and younger professors of medicine are carrying the work forward.
Like Ivers, Amir Mohareb is an infectious disease specialist. He divides his time between rotations at Mass General Hospital and far-flung zones of global need. The “wrestling” can be painful, Mohareb said. “When we’re overseas, treating someone, we’ll think, ‘I could treat this at home, we could save this person’s life. Here we can’t.’”
But it is also productive. In just the past five years, alongside his frontline care, Mohareb has published dozens of research articles on subjects like oil spills, prison outbreaks, and the deadly risks of migration through Central and South America.
Recently, his work took him to the Darién Gap, on the Panama-Colombia border.
“It’s a region of dense rainforest, and it’s extremely dangerous,” Mohareb said, citing mosquito-borne infections, cartel violence, and unpredictable weather. “But up until this year, probably, people have been migrating through it, in the hundreds of thousands” annually en route to the United States.
In a 2023 article for The Lancet, Mohareb and Panamanian colleagues gauged the consequences. They used 138 autopsies — a small sample, Mohareb said — to reconstruct the lives of migrants lost in the gap. They found many young adults — but also children and the elderly; Haitian and Ecuadorian migrants were flanked by others from far-off Africa.
Mortality during migration to U.S. through Darién Gap, Panama, 2018–22
Through his research, Mohareb hopes to resurface stories that many in the West aren’t told — or don’t want to hear.
Being a doctor always struck him as a “sacred privilege,” he said. But research strikes him as a privilege, too, offering another kind of relief to its subjects.
“People are sharing vulnerable, sensitive information, and — when the projects are done right — they feel validated and affirmed in their perceptions: that this is happening, not just happening to me.”
Philosophers, psychiatrist consider what we lose when we outsource struggle to AI
Sy Boles
Harvard Staff Writer
Creative output has traditionally required effort — hours spent staring down the empty page, crumpled drafts tossed in the trash. But through years or decades of dedicated toil, one could achieve mastery and derive meaning from one’s accomplishments. Generative AI is poised to change that equation. Can we derive meaning from art produced with no effort?
The Gazette asked philosophers and a psychiatrist about the value of struggle itself — and what we lose if there’s an easier way out. Their comments have been edited for length and clarity.
Different philosophers of action have different views about what exactly trying is, and people will disagree about the relevance of how hard you tried and how creditworthy you are of the eventual results.
Some people — and this is not my view — think, for example, that if somebody is a natural at something, like if they’re a virtuoso violin player, then the very fact that the person is not trying hard is part of what’s so impressive. People with that kind of view are not going to be so worried about not trying hard per se as a curtailment on an accomplishment.
That’s not my view. I love trying hard. I think there are lots and lots of cases in which it just seems intuitive that the resources that somebody invests in pursuit of a goal redound to the person’s credit and make the result something that reflects more well on the person. And when I say resources, I’m thinking about the obvious things like time and money, but I’m also thinking about the more intangible, harder-to-measure things like cognitive resources or emotional resources.
Now, there are contexts where it only matters that the outcome is good: What matters is just that you get results and the output serves some purpose, and it doesn’t matter so much that it reflects really well on you. Those are the kinds of cases in which outsourcing is fine. But there are other arenas in which I want the work to reflect well on me. I don’t just care that the paper is good or is right: It matters to me that I wrote it.
I just want to give one caveat. There are also cases in which it doesn’t seem plausible that adding extra effort is a good idea. It’s not like effort is an end in itself, regardless of the good of the goal that you’re trying to attain. It’s like trying hard but not smart: I’m wasting my resources by unnecessarily flinging them into a dumb direction. So the hard question always for us as finite beings is the question about resource management. There are some kinds of outsourcing that make sense in context because they free up resources that we can better use elsewhere. We just have to accept the fact that we might not deserve as much credit for certain of these outcomes.
Maintaining standards of ‘distinctly human excellence’
Mathias Risse, Director of the Carr-Ryan Center for Human Rights; Berthold Beitz Professor in Human Rights, Global Affairs and Philosophy
I work with chatbots quite a bit just to see where they’re at. Anthropic’s Claude is my favorite. I’ve actually come to the conclusion that as of now, Claude has co-author status. At the highest level of philosophical inquiry, you can feed very sophisticated lines of reasoning and ask Claude for commentary, and Claude is there, Claude can do it. Writing books the way we have been writing books no longer makes much sense. Anybody who wants to write a book will write a much better book and will do it much faster with an AI co-author. It’s an absolutely stunning situation.
But I deliberately chose the word ‘co-author.’ You’re not outsourcing the work away; there’s just more going on. We are definitely not at a stage where you would just read whatever Claude puts together and then take it at face value; you still need a person who can judge it.
People of my generation, with my level of education, are the perfect people to use this technology. We’ve learned what we know without anything like AI. I know how to read a text by myself, I know how to do the research. But I worry that 50 years from now we’ll only have people who learned with these devices present, and it will be harder and harder to motivate people to get an education, both in order to judge what the devices are doing and also in order to live up to ideas of human excellence.
Some people inherently care about acquiring skills for the sake of acquiring skills. But most people have a lazier attitude. We need to motivate future generations to maintain the level of human excellence that previous generations have made possible, even though it is easy to outsource it. We need to find ways of focusing on living a distinctly human life, maintaining standards of distinctly human excellence, simply because they are the standards of distinctly human excellence.
‘Maybe we’re better off writing our own emails’
Jeff Behrends, Director of Ethics and Technology Initiatives at the Edmond J. Safra Center for Ethics; Senior Research Scholar and Associate Senior Lecturer on Philosophy
Nearly all of the main competing theories of welfare agree that a typical good life for a human will involve a struggle for achievement. It’ll involve hard work toward some end. But the theories disagree about why.
Maybe hard work is good for us because it’s pleasant to arrive at the end. Some theories posit that it’s getting what you want that’s good. And then still others come at it hyper-directly: They say, independently of how it feels at the end, it’s good for us to have the experience of sacrificing and then succeeding, to do actual labor and have it pay off.
But this technology makes it more realistic that you can divorce the labor from the outcome. If the only thing that matters for flourishing is feeling good, then maybe all we really need is frictionless dopamine hits. You can chat with your romance bot just as frictionlessly as you can access gambling sites and pornography sites. You can read whatever fiction you want the chatbot to spill out, without putting in any intellectual work to explore and discover on your own. There is a very serious way in which these pieces of technology can make vivid why philosophical theorizing about welfare could end up mattering a lot.
It’s all well and good to think about how AI could optimize our labor in some specific use case. There are all kinds of use cases that I have no pessimism about whatsoever: solving protein folding, doing targeted drug discovery. All of that is incredible. But I do worry about the use cases that are more general.
If you listen to the technologists themselves, we’re talking about a massive social experiment in which the ways that we have been organizing our lives for hundreds of years are massively upended. If that’s what’s going on, then I think we had better pay a lot of attention to the typical human response to that. We want to be careful not to massively disrupt what seems like part of an ordinary human experience.
We need to go back to basics. The whole project has to be oriented around what is conducive to human flourishing. Maybe we’re better off writing our own emails. Maybe we’re better off having slightly suboptimal solutions in various spaces so long as we retain the elements that are core to ordinary human interaction.
‘It’s not just the writing: It’s thinking.’
Robert Waldinger, Clinical Professor of Psychiatry at Harvard Medical School; Director of the Harvard Study of Adult Development
In the study of adult development, people talk a lot about being proud of what they did. Some people were proud of winning awards or becoming CEO — those things are nice — but what really endured was the sense of, “I did good work, and it meant something to me, and it meant something to other people.”
So I do think that there was a kind of pride in working hard, in working diligently. It’s a kind of ethic. Sure, there are people who are happy to get away with doing as little as possible. Maybe not as many of those people are attracted to places like Harvard; it’s kind of a self-selecting group. But a lot of what’s important for many of us is the sense of, “I got better at this. I learned to do this. I learned to do it well.” There’s something satisfying about it.
I use AI sometimes. I put some writing into AI and say, “Make this better,” and it does. And then I feel guilty. It did in three seconds what might have taken me an hour. My God, I spent all these years honing my writing skills. I went to public school in Des Moines, Iowa, and we had to write an essay every week. I agonized over that stuff, and I got better at it. When I was 12 and having to write that weekly essay, if you had told me I could not do it, of course, I would have said, “Great!” But now I’m glad that they made me do it.
So part of it is this skill development I feel proud of. But it’s not just the writing: It’s thinking. Does the first part of this sentence logically lead to the second? It’s a way of honing our ability to think, not just to string words together. What if we don’t have to do that anymore?
I practice Zen, and Zen very much emphasizes each moment. How do you want to spend your moments? I could be retired now. I don’t need to keep working. But I’m working because I really get satisfaction from doing the work we do. I think there is some intrinsic satisfaction in the journey, not just the destination.
Cornell will send its largest-ever delegation to Climate Week NYC 2025, to present on issues including the renewable energy transition, protecting public health from heat waves and addressing the impact of climate change on housing.
Cytora’s platform helps insurers digitise their risk data at scale, turning complex documents and unstructured information into structured, decision-ready formats. The acquisition brings together Cytora’s AI-enabled risk digitisation platform with Applied Systems' suite of insurance solutions, enabling greater intelligent automation, connectivity and efficiency across the insurance lifecycle. This combination is expected to unlock increased growth and productivity across the sector.
Cytora was founded in 2012 as a University startup with early support from Cambridge Enterprise, which first invested in the company in 2014. Recognising the potential of its technology to transform risk analytics and insurance workflows, Cambridge Enterprise continued to support Cytora through two subsequent investment rounds, backing its evolution from a geopolitical risk analytics start-up into a global provider of AI-powered solutions for risk digitisation.
Amanda Wooding, Deputy Head of Ventures, Cambridge Enterprise Ventures, said: “We are delighted to see Cytora reach this exciting milestone. The acquisition by Applied Systems is a strong endorsement of the transformative impact of their technology on the insurance industry. Supporting Cytora from its early stages has been a privilege and we are proud to have played a part in their journey from a Cambridge startup to the leading risk digitisation platform in the insurance industry.”
This acquisition marks a significant milestone for Cambridge Enterprise Ventures and its mission to support the commercialisation of University research. It reflects the long-term value of investing in early-stage ventures and the potential of Cambridge-founded companies to shape global industries.
Applied Systems, a global provider of insurance software solutions, has acquired Cytora, a University of Cambridge AI spinout that has become the leading digital risk-processing platform for the insurance industry.
In his new book, Cornell professor and historian Thomas J. Campanella shines a light on a pair of alumni from a century ago who helped create some of New York City’s most recognizable sights but have been largely overlooked.
As generative AI reshapes how we communicate, work, and make decisions, Angelina Wang is making sure these systems serve everyone — not just a privileged few.
By Dr Ruklanthi de Alwis, Deputy Director of the Centre for Outbreak Preparedness and Asst Prof in the Emerging Infectious Diseases Programme, Duke-NUS Medical School; Assoc Prof Yeo Tsin Wen, from the Lee Kong Chian School of Medicine at NTU and from Woodlands Health; Prof Lisa Ng, Executive Director of the Infectious Diseases Labs at A*STAR; and Prof Neelika Malavige, from the University of Sri Jayewardenepura
The Romans have long been credited with bringing industry to Britain, including large-scale lead and iron production. But what happened once the Romans left around 400 AD has been unclear. It was generally assumed that industrial-scale production declined, since no written evidence for lead exploitation exists after the 3rd century.
To test this assumption, researchers from the Universities of Cambridge and Nottingham examined a five-metre-long sediment core from Aldborough in Yorkshire, the Roman tribal town of the Brigantes and an important centre of metal production. Their findings, published in the journal Antiquity, confirm that metal production did not collapse immediately after the Romans left Britain.
Professor Martin Millett, from Cambridge’s Faculty of Classics and Fitzwilliam College, said: “This collaborative work which forms part of a long-term project at Aldborough adds a new dimension to our understanding of the history of this important Roman town in the immediately post-Roman period. It has significant implications for our wider understanding of the end of Roman Britain.”
The study’s findings indicate that metal production in Britain continued long after the end of the Roman period and did not decline until a sudden crash around 550-600 AD.
The researchers found low levels of lead and iron production in the 4th to early 5th centuries AD, but a large, continuous rise in iron smelting – and, to a lesser extent, lead smelting – through the 5th to mid-6th centuries, drawing on the same ore sources and using coal just as in the Roman period. This undermines the popular belief that post-Roman Britain was a ‘Dark Age’ in which industrial production regressed to pre-Roman levels.
The cause of the sudden crash remains uncertain, but textual evidence from the Mediterranean and modern-day France (from the mid-to-late 6th century) shows that this period saw multiple waves of bubonic plague, and perhaps smallpox. These findings, combined with DNA evidence from Edix Hill cemetery in Cambridgeshire, show that bubonic plague was killing people in eastern England from the 540s, and that this period marked the point of transformation at Aldborough.
Lead author, Professor Christopher Loveluck from Nottingham’s Department of Classics and Archaeology, says the Aldborough sediment core “has provided the first unbroken continuous record and timeline of metal pollution and metal economic history in Britain, from the 5th century to the present day.”
The cylinder of slowly accumulated silts was extracted from a paleochannel of the River Ure. Previous metal pollution records have been extracted far from their sources – for instance upland peat cores or mountain and polar glaciers – but this data comes from the very epicentre of production.
The researchers analysed the core alongside excavation evidence and knowledge of landscape changes at Aldborough over the last two millennia. The study benefited from the expertise of Charles French, Emeritus Professor of Geoarchaeology at Cambridge, who applies archaeological techniques and micromorphological analytical techniques to the interpretation of buried landscapes.
The study indicates that lead and iron production was very active again before the Vikings arrived and expanded under their control. Textual and archaeological sources already suggest that there was a growing focus on domestic economies rather than international trade by that time. It has been difficult to prove this at a macro-scale, but the new results show a boom in raw metal production between the end of the 8th century and through to the 10th century, revealing regional-level economic growth, which has never been measured beyond single sites before.
The study goes on to show a decline in metal production through the 11th century, with renewed large-scale growth in lead and iron production from the mid-12th to early 13th centuries. The results corroborate written annual sources recording increased Yorkshire and wider British lead production from the 1160s to the 1220s, and match comparable British-attributed pollution increases for these decades previously recovered from Swedish lake sediments and Swiss Alpine ice cores.
Following a decline in the 14th century, the researchers found evidence of another recovery in production which was cut short by Henry VIII’s Dissolution of the Monasteries from 1536-38.
“It became uneconomical to make fresh metal because it was ripped off all the monasteries, abbeys and religious houses,” Professor Loveluck explains. “Large-scale production resumed in the later 16th century to resource Elizabeth I’s Spanish and French wars.”
The Aldborough Roman Town Project, directed by Dr Rose Ferraby – an author of the new study – and Professor Martin Millett, from Cambridge’s Faculty of Classics, has carried out nearly 120 hectares of magnetometry inside the town and beyond, to establish a landscape scale view of the sub-surface archaeological remains of the town, its defences, road system and extra-mural areas. It has also used Ground Penetrating Radar more selectively within the town to reveal details and depths of the Roman buildings. Since 2016, a number of excavations have been carried out, re-examining earlier trenches.
Funding
The research was funded by The British Academy and the University of Cambridge.
Britain’s industrial economy did not collapse when the Romans left and went on to enjoy a Viking-age industrial boom, a new study finds, undermining a stubborn ‘Dark Ages’ narrative.
Solar photovoltaics (PV) and green roofs are increasingly being adopted worldwide as sustainable solutions for urban environments. While PV systems help to reduce reliance on fossil fuels and lower greenhouse gas emissions, green roofs lower building energy use for air conditioning, mitigate urban heat island effects, and enhance the aesthetics of rooftops.
When combined, these systems allow for more efficient use of rooftop space by harnessing the cooling benefits of rooftop greenery and the generation of renewable electricity from solar panels.
A joint research study by the National University of Singapore (NUS), the Building and Construction Authority (BCA), and the National Parks Board (NParks) demonstrates the benefits of co-locating solar panels and green roofs in tropical climates – an area that is less well-studied compared to temperate regions.
“Our study shows that co-locating solar photovoltaics with green roofs in a tropical climate is technically feasible with multiple benefits – from improving solar panel performance and supporting greenery growth, to lowering roof surface temperature. These findings highlight the potential of integrating solar energy generation with rooftop greenery to advance sustainable urban developments in Singapore and beyond,” said Assoc Prof Tay.
Comparative analysis of four different set-ups
To evaluate the performance of co-located solar panels and green roofs under tropical conditions, an experimental study was conducted on the seventh-floor rooftop of Alexandra Primary School. Four different set-ups were monitored over a 12-month period, from November 2021 to October 2022, and a comparative analysis was conducted to assess the impact on solar energy generation, plant growth, and roof temperature.
The four rooftop set-ups comprised a bare concrete roof (BR-CT), a bare green roof (BR-GR), solar panels on bare concrete (PV-CT), and solar panels with green roof (PV-GR).
The study found that the PV-GR set-up delivered the highest overall performance, achieving the highest photovoltaic performance along with the lowest and most stable roof surface temperature. This enhances solar energy generation while providing effective cooling through greenery and shading. In addition, greenery growth in the PV-GR set-up improved significantly compared to the BR-GR set-up, with a 19.8 per cent higher horizontal coverage.
Other benefits were also highlighted in the study. Firstly, in the PV-GR set-up, evapotranspiration from plants was found to have a cooling effect on solar panels, which increased the performance ratio of the solar panels by an average of 1.3 per cent under clear sky conditions as compared to the PV-CT set-up.
Secondly, the co-location of solar panels and green roofs allows buildings to stay cooler. Roof surface temperatures were reduced by up to 4.7 deg C when compared to the BR-CT set-up, while indoor ceiling surface temperatures decreased by as much as 3 deg C when compared to the PV-CT set-up, as the roof is shielded from direct sunlight. Additionally, the PV-GR set-up exhibited the lowest variability in indoor ceiling temperatures, with average ceiling temperatures remaining below 30.5 deg C when compared with the PV-CT set-up, where indoor ceiling temperature can exceed 33 deg C in the day.
Thirdly, the research also tested five shade-tolerant plant species for green roof applications. Among them, Pilea depressa, Pilea nummulariifolia, Heterotis rotundifolia, and Sphagneticola trilobata achieved, on average, 20 per cent higher horizontal coverage under the PV-GR set-up than under the BR-GR set-up. Notably, Pilea depressa showed the most significant improvement, indicating its suitability for co-located solar panel–green roof installations.
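For context, the performance ratio cited in the first benefit is the standard PV metric: measured yield per kilowatt-peak of capacity, divided by the reference yield implied by the irradiation the panels received. A minimal sketch with purely illustrative numbers (not measurements from this study):

```python
def performance_ratio(energy_kwh, rated_kwp, irradiation_kwh_m2, g_ref=1.0):
    """Standard PV performance ratio: actual specific yield divided by
    the reference yield implied by plane-of-array irradiation.

    energy_kwh:         measured AC energy over the period
    rated_kwp:          nameplate DC capacity (kW at standard test conditions)
    irradiation_kwh_m2: plane-of-array insolation over the period
    g_ref:              reference irradiance, 1 kW/m2 at STC
    """
    reference_yield = irradiation_kwh_m2 / g_ref  # equivalent full-sun hours
    actual_yield = energy_kwh / rated_kwp         # kWh per kWp
    return actual_yield / reference_yield

# Illustrative numbers only, chosen to show the shape of the calculation:
pr_ct = performance_ratio(408.0, 1.0, 510.0)    # panels over bare concrete
pr_gr = performance_ratio(413.3, 1.0, 510.0)    # panels over greenery
gain = (pr_gr / pr_ct - 1) * 100                # ~1.3 per cent improvement
```

Because both set-ups see the same irradiation, a 1.3 per cent rise in performance ratio translates directly into 1.3 per cent more energy from the same panels.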
Optimising Singapore’s rooftop spaces for synergistic benefits
This study demonstrates that co-locating solar panels with green roofs is technically feasible, and offers three key benefits – enhanced solar energy generation, improved greenery coverage, and reduced thermal impact to the building.
For building owners, the outcomes from this study demonstrate the feasibility and added benefits of solar panel-green roof systems. Cumulatively, the impact from these benefits may translate into potential cost savings.
Given Singapore’s limited rooftop space, the co-location of solar panels and green roofs represents a smarter use of space, contributing to the nation’s green building initiatives and advancing its vision of becoming both a “Low-Carbon City” and a “City in Nature”.
The autumn semester at ETH Zurich begins on 15 September. Whereas the number of new Bachelor’s degree students remains flat, the number of Master’s degree students has seen a slight increase.
“No man is an island.” This maxim held true during the inaugural Shaping Healthcare Innovation For Tomorrow (SHIFT) hackathon, a ground-up hackathon that empowers students to co-create innovative solutions to real-world healthcare challenges.
Organised by the Singapore Nursing Innovation Group (SNIG), a student-led initiative under the NUS Alice Lee Centre for Nursing Studies (NUS Nursing), the hackathon brought together over 100 students from five local universities and polytechnics — NUS, the Singapore Institute of Technology (SIT), Ngee Ann Polytechnic, Nanyang Polytechnic (NYP) and the Institute of Technical Education — to collaborate on innovations spanning three subthemes: Chronic Disease Prevention & Early Detection, Lifestyle & Behavioural Health Modification, and Ageing Population & Preventive Care.
Held from 31 August to 6 September 2025, the interdisciplinary hackathon comprised almost 30 teams, with at least one Nursing student per team, offering a valuable opportunity for students to collaborate across fields and gain mentorship from healthcare professionals.
At the final showcase on 6 September 2025, three teams walked away with top honours for their innovative ideas — a smart sock, an Artificial Intelligence (AI) social network, and a behavioural change virtual pet — all poised to shape the future of healthcare.
Grand Prize: SafeStep Smart Sock
The top prize went to Team Wildcard — a trio from NUS comprising Magdalene Lim from NUS Nursing, Roopashini Sivananthan from NUS College of Design and Engineering, and Hanzalah bin Azmi from the NUS Faculty of Law. Their prototype, SafeStep, is a breathable smart sock equipped with sensors that detect falls in real time and alert both wearer and caregiver. Designed for visually impaired elders, the sock aims to reduce fall-related injuries, one of the greatest impediments to seniors’ independence.
The smart sock is not the team’s first healthcare project. The trio, who first met at a design thinking workshop, had earlier collaborated on a pacifier for babies with cleft palates. They are now testing the viability and user experience of SafeStep by advancing it into the prototyping stage, with plans to conduct user interviews and design validation to ensure it meets patient and caregiver needs.
For Magdalene, the hackathon was a chance to bridge classroom learning with real-world insights. “As a student nurse, I don’t always have the full picture of what is being done in the community,” she said. “Our mentor’s wealth of experience really helped us see where the true pain points and gaps lie. That guidance shaped how we refined our idea, and it gave me a much deeper appreciation of how innovation must be rooted in real-world practice.”
1st Runner-Up: KampongNET for Seniors
Ctrl + Alt + Debride — comprising Winnie Low and Ivan Tan from NUS Nursing and NUS Faculty of Science (FoS), Maryam Syamilah Binti Mahmood Shah from SIT Computing Science, and Yi Jiaxin and Onquit Jake Davis Areglo from NYP School of Information Technology — took second place with KampongNET, a digital social platform with an AI voice assistant. Built for seniors living in rental homes within Silver Zone areas, the solution combats social isolation by fostering conversations and connections.
Several members of the team first met in NYP as volunteers. Winnie was the catalyst who brought them together, spurred by the opportunity to merge two disciplines rarely seen side by side. “Coding and nursing are such distinct fields, but through this hackathon we finally had the opportunity to merge them. I don’t see many nursing-focused hackathons — most are technically heavy and can feel daunting. When I saw this opportunity, I wanted to give it a try, especially with friends I knew from NYP,” she said.
2nd Runner-Up: HabiTot Virtual Pet
In third place, The Fantastic Four — Jin Li Yao and Daniel Chong Zhao Yang from FoS, Guo Xinyi from NUS Nursing, and Peh Jia En Leticia from the NUS Faculty of Arts and Social Sciences — developed HabiTot, a Tamagotchi-inspired virtual pet that rewards children for spending less time on screens. The playful interface nudges kids toward social interaction, hobbies, and healthier daily routines.
Team member Li Yao said the hackathon provided a springboard for the team to think bigger. “We see real potential in HabiTot, and when time allows, we hope to expand the idea into a patent and explore collaborations to turn it into a working prototype. This hackathon gave us a starting point, and we’re excited to see how far it could go with the right partners.”
For NUS Nursing Asst Prof Jocelyn Chew, founding SNIG stemmed from a conviction that nursing should be recognised not only for its role in bedside care, but also for its strength in problem-solving at the frontlines.
Her vision is for SNIG to grow into a national platform where nurses at all levels engage in cross-institutional collaboration, entrepreneurship, and evidence-based change. “We want to create an ecosystem that empowers nurses to go beyond solely delivering care, to also design the systems, technologies, and models of care that will shape the future of healthcare,” she said. That philosophy inspired SNIG’s first flagship project, the SHIFT Hackathon — a proving ground where students, practitioners, technologists, and community partners came together to co-create solutions.
SHIFT’s student organising committee, led by third-year NUS Nursing students Weslyn Low and Magdalene Tong, spearheaded the planning of the hackathon. They were supported by 25 nursing mentors from hospitals across Singapore, who guided teams through the fast-paced ideation and prototyping process.
Dr Lee Yee Mei, Deputy Director of Nursing at National University Hospital and one of the mentors, described the hackathon as both inspiring and a proud moment for the profession. Seeing students step up to innovate, she said, showed their potential to create ideas that benefit patients and nurses alike. She encouraged students to view innovation as part of nursing education itself — where even something as basic as measuring vital signs can be re-imagined.
Mr Wong Kok Cheong, Deputy Director of Nursing at Changi General Hospital and one of the judges for the hackathon, shared that it was indicative of a larger shift in nursing as a profession. “Nursing innovation is the next milestone for nurses – with an ageing population and shrinking workforce, innovation is essential to improving productivity and patient outcomes.”
At MIT, a few scribbles on a whiteboard can turn into a potentially transformational cancer treatment.
This scenario came to fruition this week when the U.S. Food and Drug Administration approved a system for treating an aggressive form of bladder cancer. More than a decade ago, the system started as an idea in the lab of MIT Professor Michael Cima at the Koch Institute for Integrative Cancer Research, enabled by funding from the National Institutes of Health and MIT’s Deshpande Center.
The work that started with a few researchers at MIT turned into a startup, TARIS Biomedical LLC, that was co-founded by Cima and David H. Koch Institute Professor Robert Langer, and acquired by Johnson & Johnson in 2019. In developing the core concept of a device for local drug delivery to the bladder — which represents a new paradigm in bladder cancer treatment — the MIT team approached drug delivery like an engineering problem.
“We spoke to urologists and sketched out the problems with past treatments to get to a set of design parameters,” says Cima, a David H. Koch Professor of Engineering and professor of materials science and engineering. “Part of our criteria was it had to fit into urologists’ existing procedures. We wanted urologists to know what to do with the system without even reading the instructions for use. That’s pretty much how it came out.”
To date, the system has been used in patients thousands of times. In one study involving people with high-risk, non-muscle-invasive bladder cancer whose disease had proven resistant to standard care, doctors could find no evidence of cancer in 82.4 percent of patients treated with the system. More than 50 percent of those patients were still cancer-free nine months after treatment.
The results are extremely gratifying for the team of researchers that worked on it at MIT, including Langer and Heejin Lee SM ’04, PhD ’09, who developed the system as part of his PhD thesis. And Cima says far more people deserve credit than just the ones who scribbled on his whiteboard all those years ago.
“Drug products like this take an enormous amount of effort,” says Cima. “There are probably more than 1,000 people that have been involved in developing and commercializing the system: the MIT inventors, the urologists they consulted, the scientists at TARIS, the scientists at Johnson & Johnson — and that’s not including all the patients who participated in clinical trials. I also want to emphasize the importance of the MIT ecosystem, and the importance of giving people the resources to pursue arguably crazy ideas. We need to continue to support those kinds of activities.”
In the mid-2000s, Langer connected Cima with a urologist at Boston Children’s Hospital who was seeking a new treatment for interstitial cystitis, a painful bladder disease. The standard treatment required frequent drug infusions into a patient’s bladder through a catheter, which provided only temporary relief.
A group of researchers including Cima; Lee; Hong Linh Ho Duc SM ’05, PhD ’09; Grace Kim PhD ’08; and Karen Daniel PhD ’09 began speaking with urologists and people who had run failed clinical trials involving bladder treatments to understand what went wrong. All that information went on Cima’s whiteboard over the course of several weeks. Fortunately, Cima also scribbled “Do not erase!”
“We learned a lot in the process of writing everything down,” Cima says. “We learned what not to build and what to avoid.”
With the problem well-defined, Cima received a grant from MIT’s Deshpande Center for Technological Innovation, which allowed Lee to work on designing a better solution as part of his PhD thesis.
One of the key advances the group made was using a special alloy that gave the device “shape memory” so that it could be straightened out and inserted into the bladder through a catheter. Then it would fold up, preventing it from being expelled during urination.
The new design was able to slowly release drugs over a two-week period — far longer than any other approach — and could then be removed using a thin, flexible tube commonly used in urology, called a cystoscope. The progress was enough for Cima and Langer, who are both serial entrepreneurs, to found TARIS Biomedical and license the technology from MIT. Lee and three other MIT graduates joined the company.
“It was a real pleasure working with Mike Cima, our students, and colleagues on this novel drug delivery system, which is already changing patients’ lives,” Langer says. “It’s a great example of how research at the Koch Institute starts with basic science and engineering and ends up with new treatments for cancer patients.”
The FDA’s approval of the system for the treatment of certain patients with high-risk, non-muscle-invasive bladder cancer now means that patients with this disease may have a better treatment option. Moving forward, Cima hopes the system continues to be explored to treat other diseases.
Technology conceived in Professor Michael Cima’s lab at the Koch Institute for Integrative Cancer Research (shown here), was approved by the Food and Drug Administration.
Whether you’re an artist, advertising specialist, or just looking to spruce up your home, turning everyday objects into dynamic displays is a great way to make them more visually engaging. For example, you could turn a kids’ book into a handheld cartoon of sorts, making the reading experience more immersive and memorable for a child.
But now, thanks to MIT researchers, it’s also possible to make dynamic displays without using electronics, using barrier-grid animations (or scanimations), which use printed materials instead. This visual trick involves sliding a patterned sheet across an image to create the illusion of a moving image. The secret of barrier-grid animations lies in its name: An overlay called a barrier (or grid) often resembling a picket fence moves across, rotates around, or tilts toward an image to reveal frames in an animated sequence. That underlying picture is a combination of each still, sliced and interwoven to present a different snapshot depending on the overlay’s position.
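The slicing-and-interweaving idea can be illustrated in a few lines. This is a toy sketch, not the researchers' implementation: with n frames of equal size, column c of the composite image is taken from frame c mod n, so shifting a one-in-n slit mask by a single column reveals the next frame in the sequence.

```python
def interlace(frames):
    """Interleave equal-sized frames column by column.

    frames: list of 2D row-major pixel grids. Column c of the
    composite is taken from frame c % n.
    """
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[frames[c % n][r][c] for c in range(cols)] for r in range(rows)]

def barrier_mask(cols, n, offset):
    """One-in-n slit mask: True where the composite shows through."""
    return [(c - offset) % n == 0 for c in range(cols)]

# Two 1x4 "frames" of constant colour: sliding the mask by one
# column switches which frame is visible through the slits.
frames = [[[1, 1, 1, 1]], [[2, 2, 2, 2]]]
composite = interlace(frames)  # [[1, 2, 1, 2]]
visible0 = [p for p, m in zip(composite[0], barrier_mask(4, 2, 0)) if m]  # [1, 1]
visible1 = [p for p, m in zip(composite[0], barrier_mask(4, 2, 1)) if m]  # [2, 2]
```

With more frames the same scheme holds: each mask position exposes exactly one frame's columns, which is why moving the barrier smoothly plays the sequence.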
While tools exist to help artists create barrier-grid animations, they’re typically used to create barrier patterns that have straight lines. Building off of previous work in creating images that appear to move, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a tool that allows users to explore more unconventional designs. From zigzags to circular patterns, the team’s “FabObscura” software turns unique concepts into printable scanimations, helping users add dynamic animations to things like pictures, toys, and decor.
MIT Department of Electrical Engineering and Computer Science (EECS) PhD student and CSAIL researcher Ticha Sethapakdi SM ’19, a lead author on a paper presenting FabObscura, says that the system is a one-size-fits-all tool for customizing barrier-grid animations. This versatility extends to unconventional, elaborate overlay designs, like pointed, angled lines to animate a picture you might put on your desk, or the swirling, hypnotic appearance of a radial pattern you could spin over an image placed on a coin or a Frisbee.
“Our system can turn a seemingly static, abstract image into an attention-catching animation,” says Sethapakdi. “The tool lowers the barrier to entry to creating these barrier-grid animations, while helping users express a variety of designs that would’ve been very time-consuming to explore by hand.”
Behind these novel scanimations is a key finding: Barrier patterns can be expressed as any continuous mathematical function — not just straight lines. Users can type these equations into a text box within the FabObscura program, and then see how it graphs out the shape and movement of a barrier pattern. If you wanted a traditional horizontal pattern, you’d enter in a constant function, where the output is the same no matter the input, much like drawing a straight line across a graph. For a wavy design, you’d use a sine function, which is smooth and resembles a mountain range when plotted out. The system’s interface includes helpful examples of these equations to guide users toward their preferred pattern.
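The function-defined barrier can be sketched as follows. This is an illustrative simplification, not FabObscura's actual code: a pixel is treated as a transparent slit when its vertical distance to the curve y = f(x), taken modulo the slit period, is small. A constant f yields the classic straight horizontal barrier; a sine yields a wavy one.

```python
import math

def barrier(f, width, height, period=6, slit=1):
    """Boolean slit mask for a barrier defined by a function f.

    A pixel (x, y) is transparent when its vertical distance to
    the curve y = f(x), taken modulo `period`, falls within `slit`.
    """
    return [[(y - f(x)) % period < slit for x in range(width)]
            for y in range(height)]

flat = barrier(lambda x: 0, width=8, height=6)        # straight horizontal slits
wavy = barrier(lambda x: 2 * math.sin(x / 3), 8, 6)   # slits tracing a sine curve
```

With `flat`, every row is uniformly open or closed, a classic picket-fence barrier; with `wavy`, the open pixels snake across the image following the sine curve.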
A simple interface for elaborate ideas
FabObscura works for all known types of barrier-grid animations, supporting a variety of user interactions. The system enables the creation of a display with an appearance that changes depending on your viewpoint. FabObscura also allows you to create displays that you can animate by sliding or rotating a barrier over an image.
To produce these designs, users can upload a folder of frames of an animation (perhaps a few stills of a horse running), or choose from a few preset sequences (like an eye blinking) and specify the angle your barrier will move. After previewing your design, you can fabricate the barrier and picture onto separate transparent sheets (or print the image on paper) using a standard 2D printer, such as an inkjet. Your image can then be placed and secured on flat, handheld items such as picture frames, phones, and books.
You can enter separate equations if you want two sequences on one surface, which the researchers call “nested animations.” Depending on how you move the barrier, you’ll see a different story being told. For example, CSAIL researchers created a car that rotates when you move its sheet vertically, but transforms into a spinning motorcycle when you slide the grid horizontally.
These customizations lead to unique household items, too. The researchers designed an interactive coaster that you can switch from displaying a “coffee” icon to symbols of a martini and a glass of water by pressing your fingers down on the edges of its surface. The team also spruced up a jar of sunflower seeds, producing a flower animation on the lid that blooms when twisted off.
Artists, including graphic designers and printmakers, could also use this tool to make dynamic pieces without needing to connect any wires. The tool saves them crucial time to explore creative, low-power designs, such as a clock with a mouse that runs along as it ticks. FabObscura could produce animated food packaging, or even reconfigurable signage for places like construction sites or stores that notify people when a particular area is closed or a machine isn’t working.
Keep it crisp
FabObscura’s barrier-grid creations do come with certain trade-offs. While nested animations are novel and more dynamic than a single-layer scanimation, their visual quality isn’t as strong. The researchers wrote design guidelines to address these challenges, recommending users upload fewer frames for nested animations to keep the interlaced image simple and stick to high-contrast images for a crisper presentation.
In the future, the researchers intend to expand what users can upload to FabObscura, like being able to drop in a video file that the program can then select the best frames from. This would lead to even more expressive barrier-grid animations.
FabObscura might also step into a new dimension: 3D. While the system is currently optimized for flat, handheld surfaces, CSAIL researchers are considering implementing their work into larger, more complex objects, possibly using 3D printers to fabricate even more elaborate illusions.
Sethapakdi wrote the paper with several CSAIL affiliates: Zhejiang University PhD student and visiting researcher Mingming Li; MIT EECS PhD student Maxine Perroni-Scharf; MIT postdoc Jiaji Li; MIT associate professors Arvind Satyanarayan and Justin Solomon; and senior author and MIT Associate Professor Stefanie Mueller, leader of the Human-Computer Interaction (HCI) Engineering Group at CSAIL. Their work will be presented at the ACM Symposium on User Interface Software and Technology (UIST) this month.
“Our system can turn a seemingly static, abstract image into an attention-catching animation,” says MIT PhD student Ticha Sethapakdi, a lead researcher on the FabObscura project. “The tool lowers the barrier to entry to creating these barrier-grid animations, while helping users express a variety of designs that would’ve been very time-consuming to explore by hand.”
Whether you’re an artist, advertising specialist, or just looking to spruce up your home, turning everyday objects into dynamic displays is a great way to make them more visually engaging. For example, you could turn a kids’ book into a handheld cartoon of sorts, making the reading experience more immersive and memorable for a child.
But now, thanks to MIT researchers, it’s also possible to make dynamic displays without using electronics, using barrier-grid animations (or scanimations), which use printed materials instead. This visual trick involves sliding a patterned sheet across an image to create the illusion of a moving image. The secret of barrier-grid animations lies in its name: An overlay called a barrier (or grid) often resembling a picket fence moves across, rotates around, or tilts toward an image to reveal frames in an animated sequence. That underlying picture is a combination of each still, sliced and interwoven to present a different snapshot depending on the overlay’s position.
While tools exist to help artists create barrier-grid animations, they’re typically used to create barrier patterns that have straight lines. Building off of previous work in creating images that appear to move, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a tool that allows users to explore more unconventional designs. From zigzags to circular patterns, the team’s “FabObscura” software turns unique concepts into printable scanimations, helping users add dynamic animations to things like pictures, toys, and decor.
MIT Department of Electrical Engineering and Computer Science (EECS) PhD student and CSAIL researcher Ticha Sethapakdi SM ’19, a lead author on a paper presenting FabObscura, says that the system is a one-size-fits-all tool for customizing barrier-grid animations. This versatility extends to unconventional, elaborate overlay designs, like pointed, angled lines to animate a picture you might put on your desk, or the swirling, hypnotic appearance of a radial pattern you could spin over an image placed on a coin or a Frisbee.
“Our system can turn a seemingly static, abstract image into an attention-catching animation,” says Sethapakdi. “The tool lowers the barrier to entry to creating these barrier-grid animations, while helping users express a variety of designs that would’ve been very time-consuming to explore by hand.”
Behind these novel scanimations is a key finding: Barrier patterns can be expressed as any continuous mathematical function — not just straight lines. Users can type these equations into a text box within the FabObscura program, and then see how it graphs out the shape and movement of a barrier pattern. If you wanted a traditional horizontal pattern, you’d enter a constant function, where the output is the same no matter the input, much like drawing a straight line across a graph. For a wavy design, you’d use a sine function, which is smooth and resembles a mountain range when plotted out. The system’s interface includes helpful examples of these equations to guide users toward their preferred pattern.
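The idea that a barrier pattern is “just a function” can be sketched in a few lines of Python. This is a hypothetical illustration of the concept, not FabObscura’s actual implementation: for each column of the barrier, a user-supplied function gives a vertical offset, and one transparent slit is opened per group of rows (one row per animation frame). A constant function yields the classic straight picket-fence barrier; a sine yields a wavy one.

```python
import math

def barrier_mask(width, height, num_frames, f):
    """Build a binary barrier mask (1 = transparent slit, 0 = opaque).

    For each column x, f(x) shifts which rows are transparent, so the
    slit pattern traces out the graph of f across the barrier.
    """
    mask = [[0] * width for _ in range(height)]
    for x in range(width):
        offset = int(round(f(x)))
        for y in range(height):
            # one transparent slit per group of num_frames rows
            if (y + offset) % num_frames == 0:
                mask[y][x] = 1
    return mask

# Traditional horizontal barrier: a constant function.
straight = barrier_mask(8, 8, 4, lambda x: 0.0)

# Wavy barrier: a sine function varies the slit offset across columns.
wavy = barrier_mask(8, 8, 4, lambda x: 2.0 * math.sin(x / 3.0))
```

Because the offset only shifts which rows are open, each column still exposes exactly one frame of the interlaced image at a time, whatever the function’s shape.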
A simple interface for elaborate ideas
FabObscura supports all known types of barrier-grid animations and a variety of user interactions: displays whose appearance changes depending on your viewpoint, and displays you animate by sliding or rotating a barrier over an image.
To produce these designs, users upload a folder of animation frames (perhaps a few stills of a horse running) or choose from a few preset sequences (like an eye blinking), then specify the angle along which the barrier will move. After previewing the design, they can fabricate the barrier and picture on separate transparent sheets (or print the image on paper) using a standard 2D printer, such as an inkjet. The image can then be placed and secured on flat, handheld items such as picture frames, phones, and books.
You can enter separate equations if you want two sequences on one surface, which the researchers call “nested animations.” Depending on how you move the barrier, you’ll see a different story being told. For example, CSAIL researchers created a car that rotates when you move its sheet vertically, but transforms into a spinning motorcycle when you slide the grid horizontally.
These customizations lead to unique household items, too. The researchers designed an interactive coaster that you can switch from displaying a “coffee” icon to symbols of a martini and a glass of water by pressing your fingers down on the edges of its surface. The team also spruced up a jar of sunflower seeds, producing a flower animation on the lid that blooms when twisted off.
Artists, including graphic designers and printmakers, could also use this tool to make dynamic pieces without needing to connect any wires. The tool saves them crucial time to explore creative, low-power designs, such as a clock with a mouse that runs along as it ticks. FabObscura could produce animated food packaging, or even reconfigurable signage for places like construction sites or stores that notify people when a particular area is closed or a machine isn’t working.
Keep it crisp
FabObscura’s barrier-grid creations do come with certain trade-offs. While nested animations are novel and more dynamic than a single-layer scanimation, their visual quality isn’t as strong. The researchers wrote design guidelines to address these challenges, recommending users upload fewer frames for nested animations to keep the interlaced image simple and stick to high-contrast images for a crisper presentation.
In the future, the researchers intend to expand what users can upload to FabObscura, like being able to drop in a video file that the program can then select the best frames from. This would lead to even more expressive barrier-grid animations.
FabObscura might also step into a new dimension: 3D. While the system is currently optimized for flat, handheld surfaces, CSAIL researchers are considering implementing their work into larger, more complex objects, possibly using 3D printers to fabricate even more elaborate illusions.
Sethapakdi wrote the paper with several CSAIL affiliates: Zhejiang University PhD student and visiting researcher Mingming Li; MIT EECS PhD student Maxine Perroni-Scharf; MIT postdoc Jiaji Li; MIT associate professors Arvind Satyanarayan and Justin Solomon; and senior author and MIT Associate Professor Stefanie Mueller, leader of the Human-Computer Interaction (HCI) Engineering Group at CSAIL. Their work will be presented at the ACM Symposium on User Interface Software and Technology (UIST) this month.
New blood test detects HPV-associated head and neck cancer 10 years early
Tool identifies disease before symptoms appear
Mass General Brigham Communications
Human papillomavirus (HPV) causes an estimated 70 percent of head and neck cancers in the U.S., making it the most common cancer caused by the virus. Yet unlike cervical cancers caused by HPV, there is no screening test for HPV-associated head and neck cancers.
In a new federally funded study, Harvard-affiliated Mass General Brigham researchers show that a novel liquid biopsy tool they developed, called HPV-DeepSeek, can identify HPV-associated head and neck cancer up to 10 years before symptoms appear. By catching cancers earlier with this novel test, patients may experience higher treatment success and require a less intense regimen, according to the authors.
Findings from the study were published in the Journal of the National Cancer Institute.
“Our study shows for the first time that we can accurately detect HPV-associated cancers in asymptomatic individuals many years before they are ever diagnosed with cancer,” said lead study author Daniel L. Faden, principal investigator in the Mike Toth Head and Neck Cancer Research Center at Mass Eye and Ear and assistant professor of otolaryngology–head and neck surgery at Harvard Medical School. “By the time patients enter our clinics with symptoms from the cancer, they require treatments that cause significant, life-long side effects. We hope tools like HPV-DeepSeek will allow us to catch these cancers at their very earliest stages, which ultimately can improve patient outcomes and quality of life.”
HPV-DeepSeek uses whole-genome sequencing to detect microscopic fragments of HPV DNA that have broken off from a tumor and entered the bloodstream. Previous research from this team showed the test could achieve 99 percent specificity and 99 percent sensitivity for diagnosing cancer at the time of first presentation to a clinic, outperforming current testing methods.
To determine whether HPV-DeepSeek could detect HPV-associated head and neck cancer long before diagnosis, researchers tested 56 samples from the Mass General Brigham Biobank: 28 from individuals who went on to develop HPV-associated head and neck cancer years later, and 28 from healthy controls.
HPV-DeepSeek detected HPV tumor DNA in 22 out of 28 blood samples from patients who later developed the cancer, whereas all 28 control samples tested negative, indicating that the test is highly specific. The test was better able to detect HPV DNA in blood samples that were collected closer to the time of the patients’ diagnosis, and the earliest positive result was for a blood sample collected 7.8 years prior to diagnosis.
Using machine learning, the researchers were able to improve the test’s power so that it accurately identified 27 out of 28 cancer cases, including samples collected up to 10 years prior to diagnosis.
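As a quick back-of-envelope check (not part of the study itself), the sensitivity and specificity figures follow directly from the counts reported above: 22 of 28 cases detected before the machine-learning refinement, 27 of 28 after, and all 28 controls testing negative.

```python
def sensitivity(true_pos, total_pos):
    """Fraction of actual cancer cases the test correctly flags."""
    return true_pos / total_pos

def specificity(true_neg, total_neg):
    """Fraction of healthy controls the test correctly clears."""
    return true_neg / total_neg

base_sens = sensitivity(22, 28)   # ~79% before the machine-learning boost
ml_sens = sensitivity(27, 28)     # ~96% after
spec = specificity(28, 28)        # 100%: all controls tested negative

print(f"{base_sens:.0%} -> {ml_sens:.0%} sensitivity, {spec:.0%} specificity")
```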
The authors are now validating these findings in a second blinded study funded by the National Institutes of Health using hundreds of samples collected as part of the Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial at the National Cancer Institute.
Funding for this work came from the National Institute of Dental and Craniofacial Research of the National Institutes of Health.
‘Now I have become death, the destroyer of the worlds’
Oral history offers kaleidoscopic view of angst and relief, hope and dread at test of atomic bomb 80 years ago
Excerpted from “The Devil Reached Toward the Sky: An Oral History of the Making and Unleashing of the Atomic Bomb” by Garrett M. Graff ’03.
Wisconsin physicist Joseph O. Hirschfelder: It was time to get ready for the explosion. There were 300 of us assembled at our post. These included soldiers, scientists, visiting dignitaries, etc. We were all cold and tired and very, very nervous. Most of us paced up and down. We all had been given special very, very dark glasses to watch the explosion.
Rice physicist Hugh T. Richards: I was at Base Camp, 9.7 miles from ground zero. The shot was scheduled for 2:00 a.m. July 16. However, around 2:00 a.m. a heavy thunderstorm hit the base camp area and on advice of the meteorologist, the test was postponed until 5:30 a.m. to let the bad weather pass the area.
Harvard chemistry professor George B. Kistiakowsky: The thing was ready to be fired. Just before the time counting came to zero I went up to the top of the control bunker, put on dark glasses and turned away from the tower. I was rather convinced that the physicists exaggerated what would happen from a nuclear point of view. Well, I was wrong.
Brig. Gen. Thomas F. Farrell, Manhattan Project field operations chief: Dr. [J. Robert] Oppenheimer held on to a post to steady himself. For the last few seconds, he stared directly ahead.
Maj. Gen. Leslie Groves, Manhattan Project director: The blast came promptly with the zero count on July 16, 1945.
Trinity test site director Kenneth T. Bainbridge: The bomb detonated at 5:29:45 a.m.
Farrell: In that brief instant in the remote New Mexico desert the tremendous effort of the brains and brawn of all these people came suddenly and startlingly to the fullest fruition.
Los Alamos physicist Robert Christy: Oh, it was a dramatic thing!
The bomb’s core is loaded into a vehicle at the Army-owned McDonald ranch house, where it was assembled, to be transported to the nearby firing tower at the test site.
US Department of Energy, Historian’s Office
Los Alamos technician Val L. Fitch: It took about 30 millionths of a second for the flash of light from the explosion to reach us outside the bunker at S-10.
N.Y. Times reporter William L. Laurence: There rose from the bowels of the earth a light not of this world, the light of many suns in one.
Hirschfelder: All of a sudden, the night turned into day.
Groves: My first impression was one of tremendous light.
Physicist Warren Nyer: The most brilliant flash.
British physicist Otto R. Frisch: Without a sound, the sun was shining—or so it looked. The sand hills at the edge of the desert were shimmering in a very bright light, almost colorless and shapeless. This light did not seem to change for a couple of seconds and then began to dim.
Nuclear physicist and radio chemist Emilio Segrè: In fact, in a very small fraction of a second, that light, at our distance from the explosion, could give a worse sunburn than exposure for a whole day on a sunny seashore. The thought passed my mind that maybe the atmosphere was catching fire, causing the end of the world, although I knew that that possibility had been carefully considered and ruled out.
British physicist Rudolf Peierls: We had known what to expect, but no amount of imagination could have given us a taste of the real thing.
Physicist Richard P. Feynman: This tremendous flash, so bright that I duck.
Los Alamos physicist Joan Hinton: It was like being at the bottom of an ocean of light. We were bathed in it from all directions.
Los Alamos physicist Marvin H. Wilkening: It was like being close to an old-fashioned photo flashbulb. If you were close enough, you could feel warmth because of the intense light, and the light from the explosion scattering from the mountains and the clouds was intense enough to feel.
Bainbridge: I felt the heat on the back of my neck, disturbingly warm.
Richards: Although facing away from ground zero, it felt like someone had slapped my face.
Kistiakowsky: I am sure that at the end of the world—in the last millisecond of the earth’s existence—the last man will see what we have just seen.
Nyer: I knew instantly that the whole thing was a success.
Physicist Lawrence H. Johnston: At count zero we dropped our parachute gauges. There was a flash as the bomb went off and we prepared for the shock wave to reach our microphones hanging in the air from the parachutes to be recorded. The flash was pretty bright, even at 20 miles. The white light lit the ceiling of our plane, faded to orange and disappeared. My immediate reaction was, “Thank God, my detonators worked!”
Hinton: The light withdrew into the bomb as if the bomb sucked it up.
Groves: Then as I turned, I saw the now familiar fireball.
Los Alamos physicist Boyce McDaniel: The brilliant flash of an ever-growing sphere was followed by the billowing flame of an orange ball rising above the plain.
Frisch: That object on the horizon, which looked like a small sun, was still too bright to look at. I kept blinking and trying to take looks, and after another 10 seconds or so it had grown and dimmed into something more like a huge oil fire, with a structure that made it look a bit like a strawberry. It was slowly rising into the sky from the ground, with which it remained connected by a lengthening gray stem of swirling dust; incongruously, I thought of a red-hot elephant standing balanced on its trunk.
Farrell: Oppenheimer’s face relaxed into an expression of tremendous relief.
Laurence: I stood next to [physics Nobel laureate] Professor [James] Chadwick when the great moment for the neutron arrived. Never before in history had any man lived to see his own discovery materialize itself with such telling effect on the destiny of man, for the immediate present and all the generations to come. The infinitesimal neutron, to which the world paid little attention when its discovery was first announced, had cast its shadow over the entire earth and its inhabitants. He grunted, leaped lightly into the air, and was still again.
Groves: As [Vannevar] Bush [director of the Office of Scientific Research and Development], [Harvard President James] Conant, and I sat on the ground looking at this phenomenon, the first reactions of the three of us were expressed in a silent exchange of handclasps. We all arose so that by the time the shock wave arrived we were standing.
Fitch: It took the blast wave about 30 seconds. There was the initial loud report, the sharp gust of wind, and then the long period of reverberation as the sound waves echoed off the nearby mountains and came back to us.
Laurence: Out of the great silence came a mighty thunder.
Los Alamos theoretical physicist Edward Teller: Bill Laurence jumped and asked, “What was that?” It was, of course, the sound of the explosion. The sound waves had needed a couple of minutes to arrive at our spot 20 miles away.
Frisch: The bang came minutes later, quite loud though I had plugged my ears, and followed by a long rumble like heavy traffic very far away. I can still hear it.
Princeton physicist Robert R. Wilson: The memory I do have is when I took the dark glasses away, of seeing all the colors around and the sky lit up by the radiation—it was purple, kind of an aurora borealis light, and this thing like a big balloon expanding and going up. But the scale. There was this tremendous desert with the mountains nearby, but it seemed to make the mountains look small.
Laurence: For a fleeting instant the color was unearthly green, such as one sees only in the corona of the sun during a total eclipse. It was as though the earth had opened and the skies had split.
Hirschfelder: The fireball gradually turned from white to yellow to red as it grew in size and climbed in the sky; after about five seconds the darkness returned but with the sky and the air filled with a purple glow, just as though we were surrounded by an aurora borealis. For a matter of minutes we could follow the clouds containing radioactivity, which continued to glow with stria of this ethereal purple.
Christy: It was awe-inspiring. It just grew bigger and bigger, and it turned purple.
Hinton: It turned purple and blue and went up and up and up. We were still talking in whispers when the cloud reached the level where it was struck by the rising sunlight so it cleared out the natural clouds. We saw a cloud that was dark and red at the bottom and daylight on the top. Then suddenly the sound reached us. It was very sharp and rumbled and all the mountains were rumbling with it. We suddenly started talking out loud and felt exposed to the whole world.
Hirschfelder: There weren’t any agnostics watching this stupendous demonstration. Each, in his own way, knew that God had spoken.
Groves: Unknown to me and I think to everyone, [Nobel laureate physicist Enrico] Fermi was prepared to measure the blast by a very simple device.
Physicist Herbert L. Anderson: Fermi later related that he did not hear the sound of the explosion, so great was his concentration on the simple experiment he was performing: he dropped small pieces of paper and watched them fall.
Groves: There was no ground wind, so that when the shock wave hit it knocked some of the scraps several feet away.
Anderson: When the blast of the explosion hit them, it dragged them along, and they fell to the ground at some distance. He measured this distance and used the result to calculate the power of the explosion.
Groves: He was remarkably close to the calculations that were made later from the data accumulated by our complicated instruments.
Hirschfelder: Fermi’s paper strip showed that, in agreement with the expectation of the Theoretical Division, the energy yield of the atom bomb was equivalent to 20,000 tons of TNT. Professor [Isidor] Rabi, a frequent visitor to Los Alamos, won the pool on what the energy yield would be—he bet on the calculations of the Theoretical Division! None of us dared to make such a guess because we knew all of the guesstimates that went into the calculations and the tremendous precision which was required in the fabrication of the bomb.
Optical engineer and photographer Berlyn Brixner: The bomb had exceeded our greatest expectations.
Bainbridge: I had a feeling of exhilaration that the “gadget” had gone off properly followed by one of deep relief. I got up from the ground to congratulate Oppenheimer and others on the success of the implosion method. I finished by saying to Robert, “Now we are all sons of bitches.” Years later he recalled my words and wrote me, “We do not have to explain them to anyone.” I think that I will always respect his statement, although there have been some imaginative people who somehow can’t or won’t put the statement in context and get the whole interpretation. Oppenheimer told my younger daughter in 1966 that it was the best thing anyone said after the test.
Chicago physics grad student Leona H. Woods: The light from Trinity was seen in towns as far as 180 miles away.
Physicist Luis Alvarez: Arthur Compton told of a lady who visited him after the war to thank him for restoring her family’s confidence in her sanity. She had visited her daughter in Los Angeles and was driving home across New Mexico early one morning to avoid the midday heat. She told her family that she saw the sun come up in the east, set, and then reappear at the normal time for sunrise. Everyone was sure that Grandma had lost her marbles, until the story of the Trinity shot was reported in the newspapers on August 4, 1945.
Elsie McMillan: [At home in Los Alamos,] I had to try to get some more sleep. There was a light tap on my door. There stood Lois Bradbury, my friend and neighbor. She knew. Her husband [physicist Norris] was out there too. She said her children were asleep and would be all right since she was so close and could check on them every so often. “Please, can’t we stay together this long night?” she said. We talked of many things, of our men, whom we loved so much. Of the children, their futures. Of the war with all its horrors. Lois watched out of the window. It was 5:15 a.m. and we began to wonder. Had weather conditions been wrong? Had it been a dud? I sat at the window feeding [physicist] Ed’s and my baby. Lois stood staring out. There was such quiet in that room. Suddenly there was a flash and the whole sky lit up. The time was 5:30 a.m. The baby didn’t notice. We were too fearful and awed to speak. We looked at each other. It was a success.
Woods: The most important problem given to Herb was to measure the yield of the Trinity test of the plutonium bomb. Herb converted some Army tanks with thick steel shielding to drive out into the desert after the Trinity explosion for scooping up samples of surface dirt. After the successful firing at Trinity, the tanks scooped up desert sand now melted to glass, containing and also covered with fallout.
Anderson: The method worked well. The result was important. It helped decide at what height the bomb should be exploded.
Farrell: All seemed to feel that they had been present at the birth of a new age.
Los Alamos Director J. Robert Oppenheimer: We knew the world would not be the same. A few people laughed, a few people cried. Most people were silent. I remembered the line from the Hindu scripture, the Bhagavad-Gita; Vishnu [a principal Hindu deity] is trying to persuade the prince that he should do his duty, and to impress him, takes on his multi-armed form and says, “Now I have become death, the destroyer of the worlds.” I suppose we all thought that, one way or another.
Kistiakowsky: I slapped Oppenheimer on the back and said, “Oppie, you owe me $10 dollars” because in that desperate period when I was being accused as the world’s worst villain, who would be forever damned by the physicists for failing the project, I said to Oppenheimer, “I bet you my whole month’s salary against $10 dollars that implosion will work.” I still have that bill, with Oppenheimer’s signature.
Groves: Shortly after the explosion, Farrell and Oppenheimer returned by jeep to the base camp, with a number of the others who had been at the dugout. When Farrell came up to me, his first words were, “The war is over.” My reply was, “Yes, after we drop two bombs on Japan.” I congratulated Oppenheimer quietly with “I am proud of all of you,” and he replied with a simple “Thank you.” We were both, I am sure, already thinking of the future.
Oppenheimer: It was a success.
Hirschfelder: If atom bombs were feasible, then we were glad that it was we, and not our enemy, who had succeeded.
Teller: As the sun rose on July 16, some of the worst horrors of modern history—the Holocaust and its extermination camps, the destruction of Hamburg, Dresden, and Tokyo by fire-bombing, and all the personal savagery of the fighting throughout the world—were already common knowledge. Even without an atomic bomb, 1945 would have provided the capstone for a period of the worst inhumanities in modern history. People still ask, with the wisdom of hindsight: “Didn’t you realize what you were doing when you worked on the atomic bomb?” My reply is that I do not believe that any of us who worked on the bomb were without some thoughts about its possible consequences. But I would add: How could anyone who lived through that year look at the question of the atomic bomb’s effects without looking at many other questions? The year 1945 was a melange of events and questions, many of great emotional intensity, few directly related, all juxtaposed. Where is the person who can draw a reasonable lesson or a moral conclusion from the disparate events that took place around the end of World War II?
Cmdr. Norris Bradbury, physicist and head of E-5, the Implosion Experimentation Group: Some people claim to have wondered at the time about the future of mankind. I didn’t. We were at war, and the damned thing worked.
The U.S. Department of Energy’s National Nuclear Security Administration (DOE/NNSA) recently announced that it has selected MIT to establish a new research center, the Center for the Exascale Simulation of Coupled High-Enthalpy Fluid–Solid Interactions (CHEFSI), dedicated to advancing the predictive simulation of extreme environments, such as those encountered in hypersonic flight and atmospheric re-entry. The center will be part of the fourth phase of NNSA’s Predictive Science Academic Alliance Program (PSAAP-IV), which supports frontier research advancing the predictive capabilities of high-performance computing for open science and engineering applications relevant to national security mission spaces.
“CHEFSI will capitalize on MIT’s deep strengths in predictive modeling, high-performance computing, and STEM education to help ensure the United States remains at the forefront of scientific and technological innovation,” says Ian A. Waitz, MIT’s vice president for research. “The center’s particular relevance to national security and advanced technologies exemplifies MIT’s commitment to advancing research with broad societal benefit.”
CHEFSI is one of five new Predictive Simulation Centers announced by the NNSA as part of a program expected to provide up to $17.5 million to each center over five years.
CHEFSI’s research aims to couple detailed simulations of high-enthalpy gas flows with models of the chemical, thermal, and mechanical behavior of solid materials, capturing phenomena such as oxidation, nitridation, ablation, and fracture. Advanced computational models — validated by carefully designed experiments — can address the limitations of flight testing by providing critical insights into material performance and failure.
“By integrating high-fidelity physics models with artificial intelligence-based surrogate models, experimental validation, and state-of-the-art exascale computational tools, CHEFSI will help us understand and predict how thermal protection systems perform under some of the harshest conditions encountered in engineering systems,” says Raúl Radovitzky, the Jerome C. Hunsaker Professor of Aeronautics and Astronautics, associate director of the ISN, and director of CHEFSI. “This knowledge will help in the design of resilient systems for applications ranging from reusable spacecraft to hypersonic vehicles.”
Radovitzky will be joined on the center’s leadership team by Youssef Marzouk, the Breene M. Kerr (1951) Professor of Aeronautics and Astronautics, co-director of the MIT Center for Computational Science and Engineering (CCSE), and recently named the associate dean of the MIT Schwarzman College of Computing; and Nicolas Hadjiconstantinou, the Quentin Berg (1937) Professor of Mechanical Engineering and co-director of CCSE, who will serve as associate directors. The center co-principal investigators include MIT faculty members across the departments of Aeronautics and Astronautics, Electrical Engineering and Computer Science, Materials Science and Engineering, Mathematics, and Mechanical Engineering. Franklin Hadley will lead center operations, with administration and finance under the purview of Joshua Freedman. Hadley and Freedman are both members of the ISN headquarters team.
CHEFSI expects to collaborate extensively with the DOE/NNSA national laboratories — Lawrence Livermore National Laboratory, Los Alamos National Laboratory, and Sandia National Laboratories — and, in doing so, offer graduate students and postdocs immersive research experiences and internships at these facilities.
Researchers in the new MIT Center for the Exascale Simulation of Coupled High-Enthalpy Fluid–Solid Interactions will study environments and material interactions such as those involved in the reentry of spacecraft into Earth's atmosphere.
The following article is adapted from a press release issued by the Laser Interferometer Gravitational-wave Observatory (LIGO) Laboratory. LIGO is funded by the National Science Foundation and operated by Caltech and MIT, which conceived and built the project.
On Sept. 14, 2015, a signal arrived on Earth, carrying information about a pair of remote black holes that had spiraled together and merged. The signal had traveled about 1.3 billion years to reach us at the speed of light — but it was not made of light. It was a different kind of signal: a quivering of space-time called gravitational waves, first predicted by Albert Einstein 100 years prior. On that day 10 years ago, the twin detectors of the U.S. National Science Foundation Laser Interferometer Gravitational-wave Observatory (NSF LIGO) made the first-ever direct detection of gravitational waves, whispers in the cosmos that had gone unheard until that moment.
The historic discovery meant that researchers could now sense the universe through three different means. Light waves, such as X-rays, optical, radio, and other wavelengths of light, as well as high-energy particles called cosmic rays and neutrinos, had been captured before, but this was the first time anyone had witnessed a cosmic event through the gravitational warping of space-time. For this achievement, first dreamed up more than 40 years prior, three of the team’s founders won the 2017 Nobel Prize in Physics: MIT’s Rainer Weiss, professor emeritus of physics (who recently passed away at age 92); Caltech’s Barry Barish, the Ronald and Maxine Linde Professor of Physics, Emeritus; and Caltech’s Kip Thorne, the Richard P. Feynman Professor of Theoretical Physics, Emeritus.
Today, LIGO, which consists of detectors in both Hanford, Washington, and Livingston, Louisiana, routinely observes roughly one black hole merger every three days. LIGO now operates in coordination with two international partners, the Virgo gravitational-wave detector in Italy and KAGRA in Japan. Together, the gravitational-wave-hunting network, known as the LVK (LIGO, Virgo, KAGRA), has captured a total of about 300 black hole mergers, some of which are confirmed while others await further analysis. During the network’s current science run, the fourth since the first run in 2015, the LVK has discovered more than 200 candidate black hole mergers, more than double the number caught in the first three runs.
The dramatic rise in the number of LVK discoveries over the past decade is due to several improvements to the detectors — some of which involve cutting-edge quantum precision engineering. The LVK detectors remain by far the most precise measuring instruments ever built by humans. The space-time distortions induced by gravitational waves are incredibly minuscule. For instance, LIGO detects changes in space-time smaller than 1/10,000 the width of a proton. That’s 1/700 trillionth the width of a human hair.
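As a back-of-the-envelope check of those figures (illustrative numbers only — the proton and hair widths below are rough assumed values, not LIGO’s calibrated sensitivity), strain is simply the fractional change in arm length, h = ΔL/L:

```python
# Rough sanity check of the strain figures quoted above.
# All physical widths here are approximate, assumed values for illustration.

ARM_LENGTH_M = 4_000          # LIGO arm length: 4 km
PROTON_WIDTH_M = 1.7e-15      # approximate proton diameter
HAIR_WIDTH_M = 1e-4           # ~100 micrometers, a typical human hair

# A displacement 1/10,000 the width of a proton:
delta_l = PROTON_WIDTH_M / 10_000
strain = delta_l / ARM_LENGTH_M          # h = delta-L / L
print(f"displacement ~ {delta_l:.1e} m, strain h ~ {strain:.1e}")

# Check the hair comparison: the displacement as a fraction of a hair's width
print(f"displacement is ~1/{HAIR_WIDTH_M / delta_l:.0e} of a hair's width")
```

With these assumed widths, the displacement comes out near 10⁻¹⁹ meters and the fraction of a hair’s width lands in the hundreds of trillions, consistent with the “1/700 trillionth” comparison in the text.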
“Rai Weiss proposed the concept of LIGO in 1972, and I thought, ‘This doesn’t have much chance at all of working,’” recalls Thorne, an expert on the theory of black holes. “It took me three years of thinking about it on and off and discussing ideas with Rai and Vladimir Braginsky [a Russian physicist], to be convinced this had a significant possibility of success. The technical difficulty of reducing the unwanted noise that interferes with the desired signal was enormous. We had to invent a whole new technology. NSF was just superb at shepherding this project through technical reviews and hurdles.”
Nergis Mavalvala, the Curtis and Kathleen Marble Professor of Astrophysics at MIT and dean of the MIT School of Science, says that the challenges the team overcame to make the first discovery are still very much at play. “From the exquisite precision of the LIGO detectors to the astrophysical theories of gravitational-wave sources, to the complex data analyses, all these hurdles had to be overcome, and we continue to improve in all of these areas,” Mavalvala says. “As the detectors get better, we hunger for farther, fainter sources. LIGO continues to be a technological marvel.”
The clearest signal yet
LIGO’s improved sensitivity is exemplified in a recent discovery of a black hole merger referred to as GW250114. (The numbers denote the date the gravitational-wave signal arrived at Earth: January 14, 2025.) The event was not that different from LIGO’s first-ever detection (called GW150914) — both involve colliding black holes about 1.3 billion light-years away with masses between 30 and 40 times that of our sun. But thanks to 10 years of technological advances reducing instrumental noise, the GW250114 signal is dramatically clearer.
“We can hear it loud and clear, and that lets us test the fundamental laws of physics,” says LIGO team member Katerina Chatziioannou, Caltech assistant professor of physics and William H. Hurt Scholar, and one of the authors of a new study on GW250114 published in Physical Review Letters.
By analyzing the frequencies of gravitational waves emitted by the merger, the LVK team provided the best observational evidence captured to date for what is known as the black hole area theorem, an idea put forth by Stephen Hawking in 1971 that says the total surface area of black holes cannot decrease. When black holes merge, their masses combine, increasing the surface area. But they also lose energy in the form of gravitational waves. In addition, the merger can spin up the combined black hole, and a faster-spinning black hole has a smaller surface area. The black hole area theorem states that despite these competing factors, the total surface area must grow.
Later, Hawking and physicist Jacob Bekenstein concluded that a black hole’s area is proportional to its entropy, or degree of disorder. The findings paved the way for later groundbreaking work in the field of quantum gravity, which attempts to unite two pillars of modern physics: general relativity and quantum physics.
In essence, the LIGO detection allowed the team to “hear” two black holes growing as they merged into one, verifying Hawking’s theorem. (Virgo and KAGRA were offline during this particular observation.) The initial black holes had a total surface area of 240,000 square kilometers (roughly the size of Oregon), while the final area was about 400,000 square kilometers (roughly the size of California) — a clear increase. This is the second test of the black hole area theorem; an initial test was performed in 2021 using data from the GW150914 signal, but because those data were not as clean, the results had a confidence level of 95 percent, compared to 99.999 percent for the new data.
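The arithmetic behind the area increase can be sketched for the simplest case of non-spinning (Schwarzschild) black holes, whose horizon area is A = 16π(GM/c²)². The masses below are assumed, GW150914-like round numbers for illustration; the real analysis must also account for spin, which is why the article’s quoted areas differ somewhat:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_area_km2(mass_solar: float) -> float:
    """Horizon area A = 16*pi*(G*M/c^2)^2 of a non-spinning black hole, in km^2."""
    r_g = G * mass_solar * M_SUN / C**2   # gravitational radius GM/c^2, meters
    return 16 * math.pi * r_g**2 / 1e6    # convert m^2 -> km^2

# Assumed illustrative masses: two ~30 and ~35 solar-mass black holes merging
# into a ~62 solar-mass remnant (~3 solar masses radiated as gravitational waves).
initial = schwarzschild_area_km2(30) + schwarzschild_area_km2(35)
final = schwarzschild_area_km2(62)
print(f"initial total ~ {initial:,.0f} km^2, final ~ {final:,.0f} km^2")
assert final > initial   # the area theorem: total horizon area grows
```

Even though roughly three solar masses are lost to gravitational waves, the final area comfortably exceeds the combined initial areas, because area grows with the square of the mass.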
Thorne recalls Hawking phoning him to ask whether LIGO might be able to test his theorem immediately after he learned of the 2015 gravitational-wave detection. Hawking died in 2018 and sadly did not live to see his theory observationally verified. “If Hawking were alive, he would have reveled in seeing the area of the merged black holes increase,” Thorne says.
The trickiest part of this type of analysis had to do with determining the final surface area of the merged black hole. The surface areas of pre-merger black holes can be more readily gleaned as the pair spiral together, roiling space-time and producing gravitational waves. But after the black holes coalesce, the signal is not as clear-cut. During this so-called ringdown phase, the final black hole vibrates like a struck bell.
In the new study, the researchers precisely measured the details of the ringdown phase, which allowed them to calculate the mass and spin of the black hole and, subsequently, determine its surface area. More specifically, they were able, for the first time, to confidently pick out two distinct gravitational-wave modes in the ringdown phase. The modes are like characteristic sounds a bell would make when struck; they have somewhat similar frequencies but die out at different rates, which makes them hard to identify. The improved data for GW250114 meant that the team could extract the modes, demonstrating that the black hole’s ringdown occurred exactly as predicted by math models based on the Teukolsky formalism — devised in 1972 by Saul Teukolsky, now a professor at Caltech and Cornell University.
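The ringdown description above — modes with similar frequencies that die out at different rates — can be modeled as a sum of damped sinusoids. The amplitudes, frequencies, and decay times below are placeholder values for illustration, not the measured GW250114 mode parameters:

```python
import math

def ringdown(t, modes):
    """Ringdown signal as a sum of damped sinusoids.

    Each mode is a tuple (amplitude, frequency_hz, tau_s, phase):
    h(t) = sum_i A_i * exp(-t / tau_i) * cos(2*pi*f_i*t + phi_i)
    """
    return sum(a * math.exp(-t / tau) * math.cos(2 * math.pi * f * t + phi)
               for a, f, tau, phi in modes)

# Two modes with nearby frequencies but different decay times
# (placeholder numbers chosen only to illustrate the structure).
modes = [
    (1.0, 250.0, 0.004, 0.0),   # longer-lived fundamental mode
    (0.5, 260.0, 0.001, 1.0),   # faster-decaying second mode
]

for ms in (0.0, 2.0, 4.0, 8.0):
    t = ms / 1000.0
    print(f"t = {ms:4.1f} ms  ->  h(t) = {ringdown(t, modes):+.4f}")
```

Because the two frequencies nearly overlap, the modes are distinguished mainly by their decay envelopes, which is what makes separating them in noisy data so difficult and why the cleaner GW250114 signal mattered.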
Another study from the LVK, submitted to Physical Review Letters today, places limits on a predicted third, higher-pitched tone in the GW250114 signal, and performs some of the most stringent tests yet of general relativity’s accuracy in describing merging black holes.
“A decade of improvements allowed us to make this exquisite measurement,” Chatziioannou says. “It took both of our detectors, in Washington and Louisiana, to do this. I don’t know what will happen in 10 more years, but in the first 10 years, we have made tremendous improvements to LIGO’s sensitivity. This not only means we are accelerating the rate at which we discover new black holes, but we are also capturing detailed data that expand the scope of what we know about the fundamental properties of black holes.”
Jenne Driggers, detection lead senior scientist at LIGO Hanford, adds, “It takes a global village to achieve our scientific goals. From our exquisite instruments, to calibrating the data very precisely, vetting and providing assurances about the fidelity of the data quality, searching the data for astrophysical signals, and packaging all that into something that telescopes can read and act upon quickly, there are a lot of specialized tasks that come together to make LIGO the great success that it is.”
Pushing the limits
LIGO and Virgo have also unveiled neutron stars over the past decade. Like black holes, neutron stars form from the explosive deaths of massive stars, but they weigh less and glow with light. Of note, in August 2017, LIGO and Virgo witnessed an epic collision between a pair of neutron stars that produced a kilonova — an explosion that sent gold and other heavy elements flying into space and drew the gaze of dozens of telescopes around the world, which captured light ranging from high-energy gamma rays to low-energy radio waves. The “multi-messenger” astronomy event marked the first time that both light and gravitational waves had been captured from a single cosmic event. Today, the LVK continues to alert astronomers to potential neutron star collisions, and they in turn use telescopes to search the skies for signs of kilonovae.
“The LVK has made big strides in recent years to make sure we’re getting high-quality data and alerts out to the public in under a minute, so that astronomers can look for multi-messenger signatures from our gravitational-wave candidates,” Driggers says.
“The global LVK network is essential to gravitational-wave astronomy,” says Gianluca Gemme, Virgo spokesperson and director of research at the National Institute of Nuclear Physics in Italy. “With three or more detectors operating in unison, we can pinpoint cosmic events with greater accuracy, extract richer astrophysical information, and enable rapid alerts for multi-messenger follow-up. Virgo is proud to contribute to this worldwide scientific endeavor.”
Other LVK scientific discoveries include the first detection of collisions between one neutron star and one black hole; asymmetrical mergers, in which one black hole is significantly more massive than its partner black hole; the discovery of the lightest black holes known, challenging the idea that there is a “mass gap” between neutron stars and black holes; and the most massive black hole merger seen yet with a merged mass of 225 solar masses. For reference, the previous record holder for the most massive merger had a combined mass of 140 solar masses.
Even in the decades before LIGO began taking data, scientists were building foundations that made the field of gravitational-wave science possible. Breakthroughs in computer simulations of black hole mergers, for example, allow the team to extract and analyze the feeble gravitational-wave signals generated across the universe.
LIGO’s technological achievements, beginning as far back as the 1980s, include several far-reaching innovations, such as a new way to stabilize lasers using the so-called Pound–Drever–Hall technique. Invented in 1983 and named for contributing physicists Robert Vivian Pound, the late Ronald Drever of Caltech (a founder of LIGO), and John Lewis Hall, this technique is widely used today in other fields, such as the development of atomic clocks and quantum computers. Other innovations include cutting-edge mirror coatings that almost perfectly reflect laser light; “quantum squeezing” tools that enable LIGO to surpass sensitivity limits imposed by quantum physics; and new artificial intelligence methods that could further hush certain types of unwanted noise.
“What we are ultimately doing inside LIGO is protecting quantum information and making sure it doesn’t get destroyed by external factors,” Mavalvala says. “The techniques we are developing are pillars of quantum engineering and have applications across a broad range of devices, such as quantum computers and quantum sensors.”
In the coming years, the scientists and engineers of LVK hope to further fine-tune their machines, expanding their reach deeper and deeper into space. They also plan to use the knowledge they have gained to build another gravitational-wave detector, LIGO India. Having a third LIGO observatory would greatly improve the precision with which the LVK network can localize gravitational-wave sources.
Looking farther into the future, the team is working on a concept for an even larger detector, called Cosmic Explorer, which would have arms 40 kilometers long. (The twin LIGO observatories have 4-kilometer arms.) A European project, called Einstein Telescope, also has plans to build one or two huge underground interferometers with arms more than 10 kilometers long. Observatories on this scale would allow scientists to hear the earliest black hole mergers in the universe.
“Just 10 short years ago, LIGO opened our eyes for the first time to gravitational waves and changed the way humanity sees the cosmos,” says Aamir Ali, a program director in the NSF Division of Physics, which has supported LIGO since its inception. “There’s a whole universe to explore through this completely new lens and these latest discoveries show LIGO is just getting started.”
The LIGO-Virgo-KAGRA Collaboration
LIGO is funded by the U.S. National Science Foundation and operated by Caltech and MIT, which together conceived and built the project. Financial support for the Advanced LIGO project was led by NSF with Germany (Max Planck Society), the United Kingdom (Science and Technology Facilities Council), and Australia (Australian Research Council) making significant commitments and contributions to the project. More than 1,600 scientists from around the world participate in the effort through the LIGO Scientific Collaboration, which includes the GEO Collaboration. Additional partners are listed at my.ligo.org/census.php.
The Virgo Collaboration is currently composed of approximately 1,000 members from 175 institutions in 20 different (mainly European) countries. The European Gravitational Observatory (EGO) hosts the Virgo detector near Pisa, Italy, and is funded by the French National Center for Scientific Research, the National Institute of Nuclear Physics in Italy, the National Institute of Subatomic Physics in the Netherlands, The Research Foundation – Flanders, and the Belgian Fund for Scientific Research. A list of the Virgo Collaboration groups can be found on the project website.
KAGRA is the laser interferometer with 3-kilometer arm length in Kamioka, Gifu, Japan. The host institute is the Institute for Cosmic Ray Research of the University of Tokyo, and the project is co-hosted by the National Astronomical Observatory of Japan and the High Energy Accelerator Research Organization. The KAGRA collaboration is composed of more than 400 members from 128 institutes in 17 countries/regions. KAGRA’s information for general audiences is at the website gwcenter.icrr.u-tokyo.ac.jp/en/. Resources for researchers are accessible at gwwiki.icrr.u-tokyo.ac.jp/JGWwiki/KAGRA.
This illustration portrays GW250114, a powerful collision between two black holes recently observed in gravitational waves. Ten years after LIGO’s landmark detection of the first gravitational waves, the observatory’s improved detectors allowed it to “hear” this celestial collision with unprecedented clarity. Though only LIGO was online during GW250114, it now routinely operates as part of a network with other gravitational-wave detectors, including Europe’s Virgo and Japan’s KAGRA.
A new study from MIT neuroscientists reveals how rare variants of a gene called ABCA7 may contribute to the development of Alzheimer’s in some of the people who carry it.
Dysfunctional versions of the ABCA7 gene, which are found in a very small proportion of the population, contribute strongly to Alzheimer’s risk. In the new study, the researchers discovered that these mutations can disrupt the metabolism of lipids that play an important role in cell membranes.
This disruption makes neurons hyperexcitable and leads them into a stressed state that can damage DNA and other cellular components. These effects, the researchers found, could be reversed by treating neurons with choline, an important precursor needed to make cell membranes.
“We found pretty strikingly that when we treated these cells with choline, a lot of the transcriptional defects were reversed. We also found that the hyperexcitability phenotype and elevated amyloid beta peptides that we observed in neurons that lost ABCA7 was reduced after treatment,” says Djuna von Maydell, an MIT graduate student and the lead author of the study.
Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory and the Picower Professor in the MIT Department of Brain and Cognitive Sciences, is the senior author of the paper, which appears today in Nature.
Membrane dysfunction
Genomic studies of Alzheimer’s patients have found that people who carry variants of ABCA7 that generate reduced levels of functional ABCA7 protein have about double the odds of developing Alzheimer’s as people who don’t have those variants.
ABCA7 encodes a protein that transports lipids across cell membranes. Lipid metabolism is also the primary target of a more common Alzheimer’s risk factor known as APOE4. In previous work, Tsai’s lab has shown that APOE4, which is found in about half of all Alzheimer’s patients, disrupts brain cells’ ability to metabolize lipids and respond to stress.
To explore how ABCA7 variants might contribute to Alzheimer’s risk, the researchers obtained tissue samples from the Religious Orders Study/Memory and Aging Project (ROSMAP), a longitudinal study that has tracked memory, motor, and other age-related changes in older people since 1994. Of about 1,200 samples in the dataset that had genetic information available, the researchers obtained 12 from people who carried a rare variant of ABCA7.
The researchers performed single-cell RNA sequencing of neurons from these ABCA7 carriers, allowing them to determine which other genes are affected when ABCA7 is missing. They found that the most significantly affected genes fell into three clusters related to lipid metabolism, DNA damage, and oxidative phosphorylation (the metabolic process that cells use to capture energy as ATP).
To investigate how those alterations could affect neuron function, the researchers introduced ABCA7 variants into neurons derived from induced pluripotent stem cells.
These cells showed many of the same gene expression changes as the cells from the patient samples, especially among genes linked to oxidative phosphorylation. Further experiments showed that the “safety valve” that normally lets mitochondria limit excess build-up of electrical charge was less active. This can lead to oxidative stress, a state that occurs when too many cell-damaging free radicals build up in tissues.
Using these engineered cells, the researchers also analyzed the effects of ABCA7 variants on lipid metabolism. Cells with the variants altered metabolism of a molecule called phosphatidylcholine, which could lead to membrane stiffness and may explain why the mitochondrial membranes of the cells were unable to function normally.
A boost in choline
Those findings raised the possibility that intervening in phosphatidylcholine metabolism might reverse some of the cellular effects of ABCA7 loss. To test that idea, the researchers treated neurons with ABCA7 mutations with a molecule called CDP-choline, a precursor of phosphatidylcholine.
As these cells began producing new phosphatidylcholine (both saturated and unsaturated forms), their mitochondrial membrane potentials also returned to normal, and their oxidative stress levels went down.
The researchers then used induced pluripotent stem cells to generate 3D tissue organoids made of neurons with the ABCA7 variant. These organoids developed higher levels of amyloid beta proteins, which form the plaques seen in the brains of Alzheimer’s patients. However, those levels returned to normal when the organoids were treated with CDP-choline. The treatment also reduced neurons’ hyperexcitability.
In a 2021 paper, Tsai’s lab found that CDP-choline treatment could also reverse many of the effects of another Alzheimer’s-linked gene variant, APOE4, in mice. She is now working with researchers at the University of Texas and MD Anderson Cancer Center on a clinical trial exploring how choline supplements affect people who carry the APOE4 gene.
Choline is naturally found in foods such as eggs, meat, fish, and some beans and nuts. Boosting choline intake with supplements may offer a way for many people to reduce their risk of Alzheimer’s disease, Tsai says.
“From APOE4 to ABCA7 loss of function, my lab demonstrates that disruption of lipid homeostasis leads to the development of Alzheimer’s-related pathology, and that restoring lipid homeostasis, such as through choline supplementation, can ameliorate these pathological phenotypes,” she says.
In addition to the rare variants of ABCA7 that the researchers studied in this paper, there is also a more common variant that is found at a frequency of about 18 percent in the population. This variant was thought to be harmless, but the MIT team showed that cells with this variant exhibited many of the same gene alterations in lipid metabolism that they found in cells with the rare ABCA7 variants.
“There’s more work to be done in this direction, but this suggests that ABCA7 dysfunction might play an important role in a much larger part of the population than just people who carry the rare variants,” von Maydell says.
The research was funded, in part, by the Cure Alzheimer’s Fund, the Freedom Together Foundation, the Carol and Gene Ludwig Family Foundation, James D. Cook, and the National Institutes of Health.
In the Alzheimer’s affected brain, abnormal levels of the beta-amyloid protein clump together to form plaques (seen in brown) that collect between neurons and disrupt cell function. Abnormal collections of the tau protein accumulate and form tangles (seen in blue) within neurons, harming synaptic communication between nerve cells.
According to ETH Zurich climate researchers, greenhouse gas emissions from major fossil fuel and cement producers are significant contributors to the occurrence and intensity of heat waves. These findings have been published in a new study in the journal Nature.
Over 300 students from across NUS gathered for “Building an Innovation Mindset” on 20 August 2025, a dynamic half-day event under the NUSOne initiative, hosted by NUS Enterprise in collaboration with the College of Design and Engineering (CDE) and the Office of the Provost. More than a theoretical introduction to entrepreneurship, the event offered an experiential journey, showing students how curiosity can be transformed into action, and how ideas can drive tangible, real-world impact.
Engineering innovation with entrepreneurial thinking
In the “Ideas to Innovation” lecture by Associate Professor Khoo Eng Tat, Assistant Dean (Research & Technology) at CDE, students were encouraged to view engineering (and learning) not just as calculations and prototypes, but as a platform for innovation and enterprise. His message: An entrepreneurial mindset is not innate – it is cultivated through questioning assumptions, embracing experimentation, and most importantly, learning from failure.
Luis Olguin Reyes, a Year 4 student at NUS Business School, aptly shared: “Failure is part of success, like in everything, but especially in entrepreneurship.” The sentiment captured the spirit of the lecture, reminding students that setbacks are not endpoints, but stepping stones for learning.
Founders’ fireside chat: Turning passion into purpose
A highlight of the afternoon was the panel discussion on “How Founders Bring Passion to Life”, moderated by Remi Choong, partner at deep-tech venture capital firm Elev8.vc. Three NUS Overseas Colleges (NOC) alumni shared their journeys:
· M. Ibnur Rashad, Founder & CEO of Ground-Up Innovation Labs for Development (GUILD)
· Chang Qingyang, Co-founder & CEO of ConcreteAI
Their stories were not polished pitches but honest reflections on long nights, bold pivots, and repeated failures. Through these raw insights, students saw how resilience, curiosity and adaptability are the real engines behind innovation.
Students left inspired to take bold actions, embrace diverse perspectives, and approach challenges with creativity. Leonard Goh, a Year 2 student at the NUS School of Computing, found the session eye-opening and quoted M. Ibnur Rashad: “Failure is the norm in entrepreneurship, and it should be expected, not avoided.”
For Reiner Ong, a Year 2 student at NUS CDE, the panel was similarly revelatory: “I’ve always been afraid to start my own entrepreneurial journey because I feared setbacks. But after today’s panel, I’ve learned that failure is just part of the journey, and we must embrace and learn from it.”
Building mindsets, block by block
The event closed with “Problem Solving Piece by Piece”, a hands-on challenge using the SOMA cube, a classic spatial puzzle made of interlocking pieces. The activity sparked laughter, collaboration, and creative problem-solving, teaching participants the value of persistence, teamwork, and breaking complex challenges into manageable steps, key habits for building an entrepreneurial mindset.
Neryss Ho, a Year 2 student at CDE, shared, “We were tasked with building the tallest tower using just a few 3D blocks, and it was so much fun thinking of creative ways to solve the problem. It was definitely my favourite activity.”
The activity underscored a vital truth: complex problems can be tackled when broken into parts, mirroring the process of entrepreneurial problem-solving in the real world.
Key takeaways: Planting the seeds of an entrepreneurial future
By the end of the event, one thing was clear: Entrepreneurship isn’t just about launching start-ups – it is a mindset. It is about cultivating curiosity and courage to ask “What if?”, and about embracing failure as a learning opportunity. It is about turning problems into possibilities, and taking the first small step toward something bigger.
Many students walked away with concrete goals, from solving one real-world problem a month to forming project teams with peers from different faculties.
“Building an Innovation Mindset” was more than a typical classroom lesson. It offered students not only a window into how ventures are built, but also personal growth in building confidence, skills, and an entrepreneurial mindset to take their first steps toward launching their own innovations and turning ideas into tangible impact.
In a study in mice, researchers have identified genes associated with the dramatic transformation of the mammary gland in pregnancy, breastfeeding, and after breastfeeding as it returns to its resting state.
The mammary gland is made up of different cell types, each with a different function, such as fat cells that provide structural support, and basal cells that are crucial for milk ejection.
The team analysed the cellular composition of the mammary gland at ten different time-points from before the first pregnancy, during pregnancy, during breastfeeding, and during a process called involution when the breast tissue is remodelled to its resting state. The mix of cell types changes dramatically through this cycle.
By measuring gene expression in the mammary gland over the same time-points, the researchers were able to link specific genes to their functions at different stages of the developmental cycle.
“Our atlas is the most detailed to date, allowing us to see which genes are expressed in which cell types at each stage of the adult mammary gland cycle,” said Dr Geula Hanin, a researcher in the University of Cambridge’s Department of Genetics, first author of the report.
The team found that genes associated with breastfeeding disorders such as insufficient milk supply are active not only in the breast cells that produce milk, but also in other cells such as basal cells, which squeeze out the milk as the infant suckles. This suggests that in some instances a mechanical problem, rather than a milk production problem, could be the cause, and provides a new cell target for investigation.
The study also found that genes associated with postpartum breast cancer become active immediately after weaning in various cell types - including in fat cells, which have previously been overlooked as contributors to breast cancer linked to childbirth. This offers a future potential target for early detection or prevention strategies.
Hanin said: “We’ve found that genes associated with problems in milk production, often experienced by breastfeeding mothers, are acting in breast cells that weren’t previously considered relevant for milk production. We’ve found genes associated with postpartum breast cancer acting in cells that have been similarly overlooked.
“This work provides many potential new ways of transforming maternal and infant health, by using genetic information to both predict problems with breastfeeding and breast cancer, and to tackle them further down the line.”
Breastfeeding affects lifelong health; for example, breast-fed babies are less likely to become obese and diabetic. Yet one in twenty women has breastfeeding difficulties, and despite its importance, this remains a greatly understudied area of women’s health.
Postpartum breast cancer occurs within five to ten years of giving birth and is linked to hormonal fluctuations, natural tissue remodelling, and the changing environment of the mammary gland during involution that makes it more susceptible to malignancy.
The researchers also focused on ‘imprinted genes’ – that is, genes that are switched on or off depending on whether they are inherited from the mother or the father. Imprinted genes in the placenta are known to regulate growth and development of the baby in the womb.
The team identified 25 imprinted genes that are active in the adult mammary gland at precise times during the development cycle. These appear to orchestrate a tightly controlled system for managing milk production and breast tissue changes during motherhood.
Some functions of the genes themselves have been identified in previous studies. This new work provides a detailed understanding of when, and where, the genes become active to cause changes in mammary gland function during its adult development cycle.
“Breastfeeding is a fundamental process that’s common to all mammals; we wouldn’t have survived without it. I hope this work will lead to new ways to support mothers who have issues with breastfeeding, so they have a better chance of succeeding,” said Hanin.
The research was funded primarily by the Medical Research Council.
A University of Cambridge study of adult mammary gland development has revealed new genes involved in breastfeeding, and provided insights into how genetic changes may be associated with breastfeeding disorders and postpartum breast cancers.
This work provides many potential new ways of transforming maternal and infant health, by using genetic information to both predict problems...and to tackle them further down the line.
Princeton President Christopher L. Eisgruber has issued a statement following the release of graduate student Elizabeth Tsurkov, who was kidnapped in Iraq in 2023.
Seven technologies developed at MIT Lincoln Laboratory, either wholly or with collaborators, have earned 2025 R&D 100 Awards. This annual awards competition recognizes the year's most significant new technologies, products, and materials available on the marketplace or transitioned to use. An independent panel of technology experts and industry professionals selects the winners.
"Winning an R&D 100 Award is a recognition of the exceptional creativity and effort of our scientists and engineers. The awarded technologies reflect Lincoln Laboratory's mission to transform innovative ideas into real-world solutions for U.S. national security, industry, and society," says Melissa Choi, director of Lincoln Laboratory.
Lincoln Laboratory's winning technologies enhance national security in a range of ways, from securing satellite communication links and identifying nearby emitting devices to providing a layer of defense for U.S. Army vehicles and protecting service members from chemical threats. Other technologies are pushing frontiers in computing, enabling the 3D integration of chips and the close inspection of superconducting electronics. Industry is also benefiting from these developments — for example, by adopting an architecture that streamlines the development of laser communications terminals.
The online publication R&D World manages the awards program. Recipients span Fortune 500 companies, federally funded research institutions, academic and government labs, and small companies. Since 2010, Lincoln Laboratory has received 108 R&D 100 Awards.
Protecting lives
Tactical Optical Spherical Sensor for Interrogating Threats (TOSSIT) is a throwable, baseball-sized sensor that remotely detects hazardous vapors and aerosols. It is designed to alert soldiers, first responders, and law enforcement to the presence of chemical threats, such as nerve and blister agents, chemicals released in industrial accidents, or fentanyl dust. Users can simply toss, drone-drop, or launch TOSSIT into an area of concern. To detect specific chemicals, the sensor samples the air with a built-in fan and uses an internal camera to observe color changes on a removable dye card. If chemicals are present, TOSSIT alerts users wirelessly on an app or via audible, light-up, or vibrational alarms in the sensor.
"TOSSIT fills an unmet need for a chemical-vapor point sensor, one that senses the immediate environment around it, that can be kinetically deployed ahead of service personnel. It provides a low-cost sensing option for vapors and solid aerosol threats — think toxic dust particles — that would otherwise not be detectable by small deployed sensor systems,” says principal investigator Richard Kingsborough. TOSSIT has been tested extensively in the field and is currently being transferred to the military.
Wideband Selective Propagation Radar (WiSPR) is an advanced radar and communications system developed to protect U.S. Army armored vehicles. The system's active electronically scanned antenna array extends signal range at millimeter-wave frequencies, steering thousands of beams per second to detect incoming kinetic threats while enabling covert communications between vehicles. WiSPR is engineered to have a low probability of detection, helping U.S. Army units evade adversaries seeking to detect radio-frequency (RF) energy emitted by radars. The system is currently in production.
"Current global conflicts are highlighting the susceptibility of armored vehicles to adversary anti-tank weapons. By combining custom technologies and commercial off-the-shelf hardware, the Lincoln Laboratory team produced a WiSPR prototype as quickly and efficiently as possible," says program manager Christopher Serino, who oversaw WiSPR development with principal investigator David Conway.
Advancing computing
Bumpless Integration of Chiplets to AI-Optimized Fabric is an approach that enables the fabrication of next-generation 2D, 2.5D, and 3D integrated circuits. As data-processing demands increase, designers are exploring 3D stacked assemblies of small specialized chips (chiplets) to pack more power into devices. Tiny bumps of conductive material are used to electrically connect these stacks, but these microbumps cannot accommodate the extremely dense, massively interconnected components needed for future microcomputers. To address this issue, Lincoln Laboratory developed a technique eliminating microbumps. Key to this technique is a lithographically produced fabric allowing electrical bonding of chiplet stack layers. Researchers used an AI-driven decision-tree approach to optimize the design of this fabric. This bumpless feature can integrate hundreds of chiplets that perform like a single chip, improving data-processing speed and power efficiency, especially for high-performance AI applications.
"Our novel, bumpless, heterogeneous chiplet integration is a transformative approach addressing two semiconductor industry challenges: expanding chip yield and reducing cost and time to develop systems," says principal investigator Rabindra Das.
Quantum Diamond Magnetic Cryomicroscope is a breakthrough in magnetic field imaging for characterizing superconducting electronics, a promising frontier in high-performance computing. Unlike traditional techniques, this system delivers fast, wide-field, high-resolution imaging at the cryogenic temperatures required for superconducting devices. The instrument combines an optical microscopy system with a cryogenic sensor head containing a diamond engineered with nitrogen-vacancy centers — atomic-scale defects highly sensitive to magnetic fields. The cryomicroscope enables researchers to directly visualize trapped magnetic vortices that interfere with critical circuit components, helping to overcome a major obstacle to scaling superconducting electronics.
“The cryomicroscope gives us an unprecedented window into magnetic behavior in superconducting devices, accelerating progress toward next-generation computing technologies,” says Pauli Kehayias, joint principal investigator with Jennifer Schloss. The instrument is currently advancing superconducting electronics development at Lincoln Laboratory and is poised to impact materials science and quantum technology more broadly.
Enhancing communications
Lincoln Laboratory Radio Frequency Situational Awareness Model (LL RF-SAM) utilizes advances in AI to enhance U.S. service members' vigilance over the electromagnetic spectrum. The modern spectrum can be described as a swamp of mixed signals originating from civilian, military, or enemy sources. In near-real time, LL RF-SAM inspects these signals to disentangle and identify nearby waveforms and their originating devices. For example, LL RF-SAM can help a user identify a particular packet of energy as a drone transmission protocol and then classify whether that drone is part of a corpus of friendly or enemy drones.
"This type of enhanced context helps military operators make data-driven decisions. The future adoption of this technology will have profound impact across communications, signals intelligence, spectrum management, and wireless infrastructure security," says principal investigator Joey Botero.
Modular, Agile, Scalable Optical Terminal (MAScOT) is a laser communications (lasercom) terminal architecture that facilitates mission-enabling lasercom solutions adaptable to various space platforms and operating environments. Lasercom is rapidly becoming the go-to technology for space-to-space links in low Earth orbit because of its ability to support significantly higher data rates compared to radio frequency terminals. However, it has yet to be used operationally or commercially for longer-range space-to-ground links, as such systems often require custom designs for specific missions. MAScOT's modular, agile, and scalable design streamlines the process for building lasercom terminals suitable for a range of missions, from near Earth to deep space. MAScOT made its debut on the International Space Station in 2023 to demonstrate NASA's first two-way lasercom relay system, and is now being prepared to serve in an operational capacity on Artemis II, NASA's moon flyby mission scheduled for 2026. Two industry-built terminals have adopted the MAScOT architecture, and technology transfer to additional industry partners is ongoing.
"MAScOT is the latest lasercom terminal designed by Lincoln Laboratory engineers following decades of pioneering lasercom work with NASA, and it is poised to support lasercom for decades to come," says Bryan Robinson, who co-led MAScOT development with Tina Shih.
Protected Anti-jam Tactical SATCOM (PATS) Key Management System (KMS) Prototype addresses the critical challenge of securely distributing cryptographic keys for military satellite communications (SATCOM) during terminal jamming, compromise, or disconnection. Realizing the U.S. Space Systems Command's vision for resilient, protected tactical SATCOM, the PATS KMS Prototype leverages innovative, bandwidth-efficient protocols and algorithms to enable real-time, scalable key distribution over wireless links, even under attack, so that warfighters can communicate securely in contested environments. PATS KMS is now being adopted as the core of the Department of Defense's next-generation SATCOM architecture.
"PATS KMS is not just a technology — it's a linchpin enabler of resilient, modern SATCOM, built for the realities of today's contested battlefield. We worked hand-in-hand with government stakeholders, operational users, and industry partners across a multiyear, multiphase journey to bring this capability to life," says Joseph Sobchuk, co-principal investigator with Nancy List. The R&D 100 Award is shared with the U.S. Space Force Space Systems Command, whose “visionary leadership has been instrumental in shaping the future of protected tactical SATCOM,” Sobchuk adds.
The MAScOT laser communications terminal is installed on the exterior of the Artemis II Orion spacecraft, expected to launch in 2026 for a mission to the moon.
Healthy Minds Survey points to encouraging findings, areas for renewed focus
Survey assessed student mental health, sense of belonging, and utilization of services and resources on campus
Julie McDonough
Harvard Correspondent
Findings released Tuesday show Harvard students scored better than the national average on measures related to mental health, belonging on campus, and awareness and utilization of resources and support services. But University administrators say there are opportunities to increase awareness of specific mental health resources and build stronger connections among students on campus.
The data, gathered through the Healthy Minds Survey, analyzed feedback on a variety of measures including anxiety, depression, disordered eating, suicidality, and binge drinking. It also measured students’ sense of belonging on campus, exploring issues of loneliness and isolation. The third category of data collected measured students’ awareness of resources on campus and identified barriers to access.
The Healthy Minds Survey was conducted at Harvard in spring 2025. With a response rate of 25 percent, more than 5,900 students across undergraduate and graduate Schools completed the 25-minute survey. Healthy Minds, a national initiative launched in 2007 by the University of Michigan, administered the survey, which is designed to gather information about mental health on college campuses. Since its inception, more than 850,000 students at 600 colleges and universities have participated, including Stanford University, MIT, Tufts University, and Boston University.
A joint initiative of the Office of the Associate Provost for Student Affairs and Harvard University Health Services, the Healthy Minds survey at Harvard stems from recommendations put forth by the 2020 Report of the Task Force on Managing Student Mental Health. To learn more about the survey data, the progress made since 2020, and next steps, the Gazette sat down with Robin Glover, associate provost for student affairs, Giang Nguyen, associate provost for campus health and wellbeing and executive director of Harvard University Health Services (HUHS), and Barbara Lewis, senior director of student mental health and chief of Counseling and Mental Health Services (CAMHS).
Why did the University conduct the Healthy Minds survey last spring?
Glover: One of the recommendations from the Task Force on Managing Student Mental Health was for the University to collect data regularly on student mental health. We had done smaller surveys, but we did not have a comprehensive, University-wide mental health survey. In addition, it had been a few years since we implemented the recommendations from the task force, and we wanted to see how we were doing.
Nguyen: We also wanted to have some sense of where we stood in relation to college and university students across the country. Healthy Minds provided us with the perfect vehicle. It was a well-established survey that had been done with hundreds of thousands of students across the country for many years. It was a good platform to understand the Harvard student experience and to compare that to students at other higher education institutions. We are very grateful to all of the students who took the time to complete the survey. The feedback is invaluable as we consider next steps for services and resources on campus.
The findings are grouped by mental health, belonging, and utilization of care and services. What was the most interesting or noteworthy finding from each group?
Nguyen: With regard to mental health, in general, Harvard students’ reported state of mental health is better than what we are seeing nationally. On the emotional flourishing scale, for example, 47 percent of Harvard students scored in the flourishing range versus 38 percent in the national sample. In addition, while not immune to anxiety or depression, Harvard students are experiencing them at lower rates than their peers nationally (22 percent versus 36 percent nationally for depression, 23 percent versus 32 percent for anxiety). With that said, we would love to see higher rates of flourishing and lower rates of anxiety or depression.
Glover: With regard to belonging, the data was very positive. Eighty-one percent strongly agree or agree that they fit in at Harvard and 83 percent see themselves as part of the community. This was a reassuring finding because we know that connecting with others in our campus community can be a challenge for some students. Though we did have 45 percent report they felt like they were isolated from campus life and 68 percent felt that others know more about what is happening on campus. There seems to be an opportunity here for everyone in our community to share information with each other about educational, social, or wellness events and activities on campus.
Lewis: With regard to utilization of services on campus, 89 percent of students indicated knowledge of mental health care and services available to them. This was really gratifying to see as we have been working hard over the past few years to raise awareness of the resources we offer and increase access to services in a timely manner. We were surprised that financial considerations arose as a barrier to access. Many of the services and support we offer to students are actually free of charge, so we feel we have some education to do around that piece.
Was there anything that surprised you about the data overall?
Glover: While students overall were aware in general of the mental health services offered, they were not as familiar with the specific services and indicated that they had not been made aware within the past academic year. This suggests to us that perhaps we need to be more intentional about identifying the specific services we offer. We also need to be sure that we remind students every year about the services offered and reach out to them through many different channels. Students turn over every year and we want to make sure we keep up a steady drumbeat of outreach and information so students know what we offer and how they can get help.
Binge drinking was one of the areas where Harvard reported higher rates than the national data. Any thoughts on why or what the University might do to improve?
Lewis: Binge drinking was a bit higher than the national average. Twenty-nine percent of Harvard students reported binge drinking versus 26 percent in the national average. This suggests we have work to do in this area. It may start with partnering with students to better understand this data and how we might address it. Our Center for Wellness and Health Promotion offers 1:1 sessions for students to discuss their relationship with alcohol or other substances. This may be a resource we should highlight more frequently.
Does the data show progress against the recommendations that came from the 2020 Task Force on Managing Student Mental Health?
Glover: One of our goals for the Implementation Committee of the Managing Student Mental Health Task Force was to increase awareness and access to services and resources on campus. The data tell us that the vast majority of students are aware of the mental health resources offered and are accessing them. This awareness has been helped by our website, www.harvard.edu/wellbeing, which created a one-stop shop for students. Before, we had a very decentralized model, and it was hard for students to know how to find the support they needed.
Lewis: On the topic of access, our utilization of care has risen, with initial consults for mental health care at Counseling and Mental Health Services increasing 14 percent per year over the past two years. Overall, 79 percent of students are satisfied with the services that they receive. We have also dramatically shortened wait times to access services by putting in place our Clinical Access Team, a team of licensed clinicians devoted to initial consultations, who then refer students to appropriate resources including CAMHS, TimelyCare, or a professional in the community. As a result, wait times for non-urgent needs have decreased significantly. These are all improvements we have made that data show are making a difference.
How do you interpret the imposter syndrome finding for Harvard students?
Glover: The imposter syndrome data was a bit surprising, though not totally unexpected. These were questions about whether students compare their abilities to others and if they are afraid of being exposed as less capable than their peers. Approximately six in 10 Harvard students often compare themselves to their peers and think others might be more intelligent. Fifty-two percent of students fear that others will discover how much knowledge or ability they lack.
Lewis: Imposter syndrome can lead to feelings of isolation, so it is important to address it and make sure all students know that they belong here. Some things to consider include openly discussing imposter syndrome, providing workshops, celebrating students’ accomplishments, encouraging mentorship, and reframing success to include effort and growth.
Tell me more about the finding on isolation and loneliness. Do you see this as part of a larger trend?
Nguyen: Yes, 45 percent of Harvard students reported feeling isolated on campus. It’s not surprising to see this data for Harvard students and of course it’s not just Harvard. There is a national epidemic of isolation and loneliness. The question is — what do we do about it? How can we help students not feel isolated and alone? Because we are in a university setting, we have a unique opportunity to build patterns of belonging and promote connection. This is an area we will want to explore moving forward.
Why do you think financial factors rose to the top as a barrier to accessing services? Do you think this is something that can be addressed?
Lewis: This is data that tells us that we have some education to do because many of the mental health services offered at Harvard are free of charge to students. Students can access Counseling and Mental Health Services and TimelyCare for short-term mental health care at no charge to them. At Harvard, financial concerns should not be a barrier to initiating care.
If a student does need longer-term care, they may need to use their health insurance plan. We know insurance can be confusing to students, but the CAMHS Clinical Access Team can help students navigate their insurance. This is a service that is extremely valuable, but not many students are aware of it. Appointments can be made online as well so it is a service that is easy to access.
Given this data, what will be areas of focus moving forward?
Glover: We need to continue our outreach to remind students of the services that are offered and to emphasize that they can get the care that they need in a timely manner. We may need to do some work about emphasizing specific services and consider different forms of outreach. We will also look at the specific topics of isolation, imposter syndrome, and binge drinking, as we mentioned previously, since our data showed room for improvement in those areas.
Where can students find out more about the mental health and support services offered on campus?
Nguyen: A good place to start is our wellbeing website at www.harvard.edu/wellbeing or our CAMHS website. Students (and the faculty or staff who support them) can also access School-specific resources through the Crimson folder for their School. It is always a good idea to reach out to Student Affairs staff at the College or your School as they have knowledge of all the resources available to students at Harvard.
Your body fought off the virus — but damaged your lungs
Researchers zero in on potential key to rapidly repairing tissue harmed by inflammation
Clea Simon
Harvard Correspondent
Daisy Hoagland (left) and Ruth Franklin.
Photos by Niles Singer/Harvard Staff Photographer
When the lungs are attacked by a virus, the damage doesn’t stop there. The body’s natural defenses cause inflammation while fighting the virus, often leaving lasting problems. The cells that make up the lungs’ mucosal lining are exposed to the environment with every breath — both highlighting the risk of infection and emphasizing the need for a robust response. In a paper published recently in Science, a team of Harvard researchers reveals how macrophages — a type of immune cell — may be key to repairing that damage, both from the initial infection and from the immune response itself.
“Nearly every tissue in the body contains resident macrophages,” explained Ruth Franklin, assistant professor of stem cell and regenerative biology, in whose lab the research was conducted. These multifunctional cells “help to maintain function of tissues.”
The problems arise when the lungs, which regularly encounter viruses and bacteria, become infected. The organ’s first priority is to fight the infection. However, the body’s reaction can damage the epithelial cells that line organs and form their protective barriers.
“What we found is that macrophages in the lung produce a growth factor, oncostatin M (or OSM), that is able to quickly restore the epithelial barrier in the lung,” Franklin said. “This rapid repair is extremely important because in the lung, you’re more vulnerable to the outside environment if you don’t have that barrier.”
“What we found is that macrophages in the lung produce a growth factor, oncostatin M (or OSM), that is able to quickly restore the epithelial barrier in the lung.”
Ruth Franklin
Daisy Hoagland, a postdoctoral researcher in Franklin’s lab and co-first author of the paper, elaborated: “There are different kinds of viruses, but some viruses go into a cell, hijack all of its machinery, and cause the cell to rupture. Also, the virus can cause the cell to self-destruct intentionally.
“A lot of the damage will happen because the immune system is trying to kill the infected cells,” she added. “It’s really hard to repair the epithelial barrier during inflammation because a lot of the inflammatory signals prevent cells from replicating and redirect cells toward defense programs instead of regeneration.”
To test whether OSM was important for repairing the epithelial barrier, the team studied mice that had been bioengineered to prevent production of OSM and infected them with the influenza virus. “By nearly every metric that we checked, mice lacking OSM had more damage than normal mice,” Hoagland said.
The next step involved using a synthetic virus-like molecule, poly(I:C), which doesn’t replicate like a real virus but nonetheless triggers an immune response. Importantly, it activates the same inflammatory signals that usually prevent cells from dividing. Using this on both the OSM-deficient mice and normal mice led to the same results. The conclusion? OSM is essential to helping the lung’s protective lining heal during the immune system’s antiviral response.
“Cells die in response to both viral infection and from the inflammation it causes,” said Franklin. “Repairing this damage is difficult while an infection is ongoing, but OSM can override some of these signals and restore the barrier.”
“A lot of the time people who die on ventilators from diseases like COVID-19 or severe viral pneumonia have actually cleared the virus, but they can’t fix their lungs in the context of all of this inflammation.”
Daisy Hoagland
This latest discovery follows years of studies of OSM that began around 2014, noted Franklin — “before I came to Harvard.” It also opens new paths of exploration. For example, said Hoagland, researchers do not yet know what OSM does when there is no infection.
“What we found is that OSM is produced at low levels in the absence of inflammation,” she said. “We’re currently trying to understand the role of OSM at baseline.”
In the meantime, the team’s research continues.
“Right now, we’re trying to see if there’s any therapeutic potential for OSM,” Hoagland said. Although the experimentation continues in mice, the goal is to see whether OSM could help repair human lungs damaged by illness.
“A lot of the time people who die on ventilators from diseases like COVID-19 or severe viral pneumonia have actually cleared the virus, but they can’t fix their lungs in the context of all of this inflammation,” Hoagland said. “We’re really hopeful that OSM could potentially be therapeutically beneficial in those situations.”
This work was funded in part by the National Institutes of Health and the National Science Foundation.
When cells are healthy, we don’t expect them to suddenly change cell types. A skin cell on your hand won’t naturally morph into a brain cell, and vice versa. That’s thanks to epigenetic memory, which enables the expression of various genes to “lock in” throughout a cell’s lifetime. Failure of this memory can lead to diseases, such as cancer.
Traditionally, scientists have thought that epigenetic memory locks genes either “on” or “off” — either fully activated or fully repressed, like a permanent Lite-Brite pattern. But MIT engineers have found that the picture has many more shades.
In a new study appearing today in Cell Genomics, the team reports that a cell’s memory is set not by on/off switching but through a more graded, dimmer-like dial of gene expression.
The researchers carried out experiments in which they set the expression of a single gene at different levels in different cells. While conventional wisdom would assume the gene should eventually switch on or off, the researchers found that the gene’s original expression persisted: Cells whose gene expression was set along a spectrum between on and off remained in this in-between state.
The results suggest that epigenetic memory — the process by which cells retain gene expression and “remember” their identity — is not binary but instead analog, which allows for a spectrum of gene expression and associated cell identities.
“Our finding opens the possibility that cells commit to their final identity by locking genes at specific levels of gene expression instead of just on and off,” says study author Domitilla Del Vecchio, professor of mechanical and biological engineering at MIT. “The consequence is that there may be many more cell types in our body than we know and recognize today, that may have important functions and could underlie healthy or diseased states.”
The study’s MIT lead authors are Sebastian Palacios and Simone Bruno, with additional co-authors.
Beyond binary
Every cell shares the same genome, which can be thought of as the starting ingredient for life. As a cell takes shape, it differentiates into one type or another, through the expression of genes in its genome. Some genes are activated, while others are repressed. The combination steers a cell toward one identity versus another.
DNA methylation, a process by which methyl groups attach to a gene’s DNA, helps lock its expression in place. DNA methylation helps a cell “remember” its unique pattern of gene expression, which ultimately establishes the cell’s identity.
Del Vecchio’s group at MIT applies mathematics and genetic engineering to understand cellular molecular processes and to engineer cells with new capabilities. In previous work, her group was experimenting with DNA methylation and ways to lock the expression of certain genes in ovarian cells.
“The textbook understanding was that DNA methylation had a role to lock genes in either an on or off state,” Del Vecchio says. “We thought this was the dogma. But then we started seeing results that were not consistent with that.”
While many of the cells in their experiment exhibited an all-or-nothing expression of genes, a significant number of cells appeared to freeze genes in an in-between state — neither entirely on nor off.
“We found there was a spectrum of cells that expressed any level between on and off,” Palacios says. “And we thought, how is this possible?”
Shades of blue
In their new study, the team aimed to see whether the in-between gene expression they observed was a fluke or a more established property of cells that until now has gone unnoticed.
“It could be that scientists disregarded cells that don’t have a clear commitment, because they assumed this was a transient state,” Del Vecchio says. “But actually these in-between cell types may be permanent states that could have important functions.”
To test their idea, the researchers ran experiments with hamster ovarian cells — a line of cells commonly used in the laboratory. In each cell, an engineered gene was initially set to a different level of expression. The gene was turned fully on in some cells, completely off in others, and set somewhere in between on and off for the remaining cells.
The team paired the engineered gene with a fluorescent marker that lights up with a brightness corresponding to the gene’s level of expression. The researchers introduced, for a short time, an enzyme that triggers the gene’s DNA methylation, a natural gene-locking mechanism. They then monitored the cells over five months to see whether the modification would lock the genes in place at their in-between expression levels, or whether the genes would migrate toward fully on or off states before locking in.
“Our fluorescent marker is blue, and we see cells glow across the entire spectrum, from really shiny blue, to dimmer and dimmer, to no blue at all,” Del Vecchio says. “Every intensity level is maintained over time, which means gene expression is graded, or analog, and not binary. We were very surprised, because we thought after such a long time, the gene would veer off, to be either fully on or off, but it did not.”
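The binary-versus-analog distinction can be illustrated with a toy simulation (a hypothetical sketch for intuition only, not the authors' model): a bistable "binary" memory pushes any intermediate expression level toward the nearest extreme, whereas an analog memory simply holds whatever level it was given — which is what the team observed over five months.

```python
def clamp(x):
    """Keep a normalized expression level within [0, 1]."""
    return max(0.0, min(1.0, x))

def binary_step(x, k=0.2):
    # Bistable ("on/off") memory: any deviation from the unstable
    # midpoint 0.5 grows, so intermediate levels drift to 0 or 1.
    return clamp(x + k * (x - 0.5))

def analog_step(x):
    # Analog ("dimmer-dial") memory: the set level is maintained as-is.
    return x

def simulate(step, x0, n=200):
    """Iterate one cell's expression level for n time steps."""
    x = x0
    for _ in range(n):
        x = step(x)
    return x

# Cells initialized across the full spectrum between off (0) and on (1).
levels = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
binary_final = [round(simulate(binary_step, x), 2) for x in levels]
analog_final = [round(simulate(analog_step, x), 2) for x in levels]
print("binary:", binary_final)  # intermediate levels collapse to 0 or 1
print("analog:", analog_final)  # every initial level persists
```

Under the binary dynamics, the intermediate cells end at 0.0 or 1.0; under the analog dynamics, each cell keeps its original level, mirroring the persistent fluorescence intensities described above.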
The findings open new avenues into engineering more complex artificial tissues and organs by tuning the expression of certain genes in a cell’s genome, like a dial on a radio, rather than a switch. The results also complicate the picture of how a cell’s epigenetic memory establishes its identity, and open up the possibility that cell modifications such as those seen in therapy-resistant tumors could be treated in a more precise fashion.
“Del Vecchio and colleagues have beautifully shown how analog memory arises through chemical modifications to the DNA itself,” says Michael Elowitz, professor of biology and biological engineering at the California Institute of Technology, who was not involved in the study. “As a result, we can now imagine repurposing this natural analog memory mechanism, invented by evolution, in the field of synthetic biology, where it could help allow us to program permanent and precise multicellular behaviors.”
“One of the things that enables the complexity in humans is epigenetic memory,” Palacios says. “And we find that it is not what we thought. For me, that’s actually mind-blowing. And I think we’re going to find that this analog memory is relevant for many different processes across biology.”
This research was supported, in part, by the National Science Foundation, MODULUS, and a Vannevar Bush Faculty Fellowship through the U.S. Office of Naval Research.
Traditionally, scientists have thought that epigenetic memory locks genes either “on” or “off” — either fully activated or fully repressed. But MIT engineers have found that a cell’s memory is set not only by on/off switching but also through a more graded, dimmer-like dial of gene expression.
New AI tool predicts therapies to restore health in diseased cells
Ekaterina Pesheva
HMS Communications
Currently using model to tackle Parkinson’s, Alzheimer’s
In a move that could reshape drug discovery, researchers at Harvard Medical School have designed an artificial intelligence model capable of identifying treatments that reverse disease states in cells.
Unlike traditional approaches that typically test one protein target or drug at a time in hopes of identifying an effective treatment, the new model, called PDGrapher and available for free, focuses on multiple drivers of disease and identifies the genes most likely to revert diseased cells back to healthy function.
The tool also identifies the best single or combined targets for treatments that correct the disease process. The work, described Tuesday in Nature Biomedical Engineering, was supported in part by federal funding.
By zeroing in on the targets most likely to reverse disease, the new approach could speed up drug discovery and design and unlock therapies for conditions that have long eluded traditional methods, the researchers noted.
“Traditional drug discovery resembles tasting hundreds of prepared dishes to find one that happens to taste perfect,” said study senior author Marinka Zitnik, associate professor of biomedical informatics in the Blavatnik Institute at HMS. “PDGrapher works like a master chef who understands what they want the dish to be and exactly how to combine ingredients to achieve the desired flavor.”
The traditional drug-discovery approach — which focuses on activating or inhibiting a single protein — has succeeded with treatments such as kinase inhibitors, drugs that block certain proteins used by cancer cells to grow and divide. However, Zitnik noted, this discovery paradigm can fall short when diseases are fueled by the interplay of multiple signaling pathways and genes. For example, many breakthrough drugs discovered in recent decades — think immune checkpoint inhibitors and CAR T-cell therapies — work by engaging broader disease processes in cells rather than any single protein.
The approach enabled by PDGrapher, Zitnik said, looks at the bigger picture to find compounds that can actually reverse signs of disease in cells, even if scientists don’t yet know exactly which molecules those compounds may be acting on.
How PDGrapher works: Mapping complex linkages and effects
PDGrapher is a type of artificial intelligence tool called a graph neural network. This tool doesn’t just look at individual data points but at the connections that exist between these data points and the effects they have on one another.
In the context of biology and drug discovery, this approach is used to map the relationship between various genes, proteins, and signaling pathways inside cells and predict the best combination of therapies that would correct the underlying dysfunction of a cell to restore healthy cell behavior. Instead of exhaustively testing compounds from large drug databases, the new model focuses on drug combinations that are most likely to reverse disease.
PDGrapher points to parts of the cell that might be driving disease. Next, it simulates what happens if these cellular parts were turned off or dialed down. The AI model then predicts whether a diseased cell would return to healthy function if certain targets were “hit.”
“Instead of testing every possible recipe, PDGrapher asks: ‘Which mix of ingredients will turn this bland or overly salty dish into a perfectly balanced meal?’” Zitnik said.
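The simulate-and-rank idea above can be sketched in miniature. The following is a toy stand-in, not PDGrapher's graph neural network: the influence graph, expression profiles, and knockout rule are all hypothetical, chosen only to show how candidate perturbations can be scored by how far they move a diseased profile toward a healthy one:

```python
# Toy illustration (not PDGrapher's actual architecture): rank candidate
# gene knockouts by how closely they restore a healthy expression profile.

# Hypothetical regulatory influence: influence[i][j] is how strongly
# gene i drives gene j's expression.
influence = [
    [0.0, 0.8, 0.5],   # gene A drives B and C
    [0.0, 0.0, 0.3],   # gene B drives C
    [0.0, 0.0, 0.0],   # gene C drives nothing
]
diseased = [1.0, 0.9, 0.8]   # all three genes highly expressed
healthy  = [1.0, 0.2, 0.1]   # gene A is normally on; B and C low

def knockout(state, gene):
    """Silence one gene and propagate the loss along the influence graph."""
    new = list(state)
    new[gene] = 0.0
    for j in range(len(state)):
        if j != gene:
            new[j] = max(0.0, new[j] - influence[gene][j] * state[gene])
    return new

def distance(a, b):
    """Squared distance between two expression profiles."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Score each single-gene knockout by the remaining distance to healthy.
scores = {g: distance(knockout(diseased, g), healthy) for g in range(3)}
best = min(scores, key=scores.get)
print("best target:", "ABC"[best])   # gene B: silencing it lowers C too,
                                     # without losing gene A's normal activity
```

In this toy network, knocking out gene A also restores B and C but destroys A's own healthy activity, so gene B scores best. PDGrapher performs an analogous search over real gene-interaction graphs, and over combinations of targets rather than single genes.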
Advantages of the new model
The researchers trained the tool on a dataset of diseased cells before and after treatment so that it could figure out which genes to target to shift cells from a diseased state to a healthy one.
Next, they tested it on 19 datasets spanning 11 types of cancer, using both genetic and drug-based experiments, asking the tool to predict various treatment options for cell samples it had not seen before and for cancer types it had not encountered.
The tool accurately predicted drug targets already known to work but that were deliberately excluded during training to ensure the model did not simply recall the right answers. It also identified additional candidates supported by emerging evidence: it highlighted KDR (VEGFR2) as a target for non-small cell lung cancer, in line with clinical evidence, and flagged TOP2A — an enzyme already targeted by approved chemotherapies — as a treatment target in certain tumors, adding to recent preclinical evidence that TOP2A inhibition may help curb the spread of metastases in non-small cell lung cancer.
The model showed superior accuracy and efficiency compared with similar tools. In previously unseen datasets, it ranked the correct therapeutic targets up to 35 percent higher than other models did and delivered results up to 25 times faster than comparable AI approaches.
What this AI advance spells for the future of medicine
The new approach could optimize the way new drugs are designed, the researchers said: instead of trying to predict how every possible change would affect a cell and then looking for a useful drug, PDGrapher directly seeks the specific targets that can reverse a disease trait. This makes it faster to test ideas and lets researchers focus on a smaller set of promising targets.
This tool could be especially useful for complex diseases fueled by multiple pathways, such as cancer, in which tumors can outsmart drugs that hit just one target. Because PDGrapher identifies multiple targets involved in a disease, it could help circumvent this problem.
Additionally, the researchers said that after careful testing to validate the model, it could one day be used to analyze a patient’s cellular profile and help design individualized treatment combinations.
Finally, because PDGrapher identifies cause-effect biological drivers of disease, it could help researchers understand why certain drug combinations work — offering new biological insights that could propel biomedical discovery even further.
The team is currently using this model to tackle brain diseases such as Parkinson’s and Alzheimer’s, looking at how cells behave in disease and spotting genes that could help restore them to health. The researchers are also collaborating with colleagues at the Center for XDP at Massachusetts General Hospital to identify new drug targets and map which genes or pairs of genes could be affected by treatments for X-linked Dystonia-Parkinsonism, a rare inherited neurodegenerative disorder.
“Our ultimate goal is to create a clear road map of possible ways to reverse disease at the cellular level,” Zitnik said.
The work was funded in part by federal grants from the National Institutes of Health, National Science Foundation CAREER Program, the U.S. Department of Defense, and the ARPA-H Biomedical Data Fabric program, as well as awards from the Chan Zuckerberg Initiative, the Gates Foundation, Amazon Faculty Research, Google Research Scholar Program, AstraZeneca Research, Roche Alliance with Distinguished Scientists, Sanofi iDEA-iTECH, Pfizer Research, John and Virginia Kaneb Fellowship at HMS, Biswas Computational Biology Initiative in partnership with the Milken Institute, HMS Dean’s Innovation Awards for the Use of Artificial Intelligence, Harvard Data Science Initiative, and the Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University. Partial support was received from the Summer Institute in Biomedical Informatics at HMS and from the ERC-Consolidator Grant.
Using tiny particles shaped like bottlebrushes, MIT chemists have found a way to deliver a large range of chemotherapy drugs directly to tumor cells.
To guide them to the right location, each particle contains an antibody that targets a specific tumor protein. This antibody is tethered to bottlebrush-shaped polymer chains carrying dozens or hundreds of drug molecules — a much larger payload than can be delivered by any existing antibody-drug conjugates.
In mouse models of breast and ovarian cancer, the researchers found that treatment with these conjugated particles could eliminate most tumors. In the future, the particles could be modified to target other types of cancer, by swapping in different antibodies.
“We are excited about the potential to open up a new landscape of payloads and payload combinations with this technology, that could ultimately provide more effective therapies for cancer patients,” says Jeremiah Johnson, the A. Thomas Geurtin Professor of Chemistry at MIT, a member of the Koch Institute for Integrative Cancer Research, and the senior author of the new study.
MIT postdoc Bin Liu is the lead author of the paper, which appears today in Nature Biotechnology.
A bigger drug payload
Antibody-drug conjugates (ADCs) are a promising type of cancer treatment that consist of a cancer-targeting antibody attached to a chemotherapy drug. At least 15 ADCs have been approved by the FDA to treat several different types of cancer.
This approach allows specific targeting of a cancer drug to a tumor, which helps to prevent some of the side effects that occur when chemotherapy drugs are given intravenously. However, one drawback to currently approved ADCs is that only a handful of drug molecules can be attached to each antibody. That means they can only be used with very potent drugs — usually DNA-damaging agents or drugs that interfere with cell division.
To try to use a broader range of drugs, which are often less potent, Johnson and his colleagues decided to adapt bottlebrush particles that they had previously invented. These particles consist of a polymer backbone to which tens to hundreds of “prodrug” molecules — inactive drug molecules that are activated upon release within the body — are attached. This structure allows the particles to deliver a wide range of drug molecules, and the particles can be designed to carry multiple drugs in specific ratios.
Using a technique called click chemistry, the researchers showed that they could attach one, two, or three of their bottlebrush polymers to a single tumor-targeting antibody, creating an antibody-bottlebrush conjugate (ABC). This means that just one antibody can carry hundreds of prodrug molecules. The currently approved ADCs can carry a maximum of about eight drug molecules.
The huge number of payloads in the ABC particles allows the researchers to incorporate less potent cancer drugs such as doxorubicin or paclitaxel, which enhances the customizability of the particles and the variety of drug combinations that can be used.
“We can use antibody-bottlebrush conjugates to increase the drug loading, and in that case, we can use less potent drugs,” Liu says. “In the future, we can very easily copolymerize with multiple drugs together to achieve combination therapy.”
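The payload advantage described above is simple arithmetic. The figures below come from the article (up to three bottlebrushes per antibody, "tens to hundreds" of prodrugs per brush, roughly eight drug molecules per conventional ADC); the choice of 100 prodrugs per brush is illustrative, not a reported value:

```python
# Illustrative payload arithmetic using figures from the article.
adc_payload = 8                   # max drug molecules on an approved ADC

brushes_per_antibody = 3          # up to three attached via click chemistry
prodrugs_per_brush = 100          # "tens to hundreds"; 100 is illustrative
abc_payload = brushes_per_antibody * prodrugs_per_brush

print(abc_payload)                 # 300 prodrugs per antibody
print(abc_payload / adc_payload)   # 37.5x the ADC payload
```

Even at the low end of "tens" of prodrugs per brush, the ABC carries several times the payload of an ADC, which is what lets it work with less potent drugs.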
The prodrug molecules are attached to the polymer backbone by cleavable linkers. After the particles reach a tumor site, some of these linkers are broken right away, allowing the drugs to kill nearby cancer cells even if those cells don’t express the target protein. Other particles are taken up by cells that do express the target before releasing their toxic payload.
Effective treatment
For this study, the researchers created ABC particles carrying a few different types of drugs: microtubule inhibitors called MMAE and paclitaxel, and two DNA-damaging agents, doxorubicin and SN-38. They also designed ABC particles carrying an experimental type of drug known as PROTAC (proteolysis-targeting chimera), which can selectively degrade disease-causing proteins inside cells.
Each bottlebrush was tethered to an antibody targeting either HER2, a protein often overexpressed in breast cancer, or MUC1, which is commonly found in ovarian, lung, and other types of cancer.
The researchers tested each of the ABCs in mouse models of breast or ovarian cancer and found that in most cases, the ABC particles were able to eradicate the tumors. This treatment was significantly more effective than giving the same bottlebrush prodrugs by injection, without being conjugated to a targeting antibody.
“We used a very low dose, almost 100 times lower compared to the traditional small-molecule drug, and the ABC still can achieve much better efficacy compared to the small-molecule drug given on its own,” Liu says.
These ABCs also performed better than two FDA-approved ADCs, T-DXd and TDM-1, which both use HER2 to target cells. T-DXd carries deruxtecan, which interferes with DNA replication, and TDM-1 carries emtansine, a microtubule inhibitor.
In future work, the MIT team plans to try delivering combinations of drugs that work by different mechanisms, which could enhance their overall effectiveness. Among these could be immunotherapy drugs such as STING activators.
The researchers are also working on swapping in different antibodies, such as antibodies targeting EGFR, which is widely expressed in many tumors. More than 100 antibodies have been approved to treat cancer and other diseases, and in theory any of those could be conjugated to cancer drugs to create a targeted therapy.
The research was funded in part by the National Institutes of Health, the Ludwig Center at MIT, and the Koch Institute Frontier Research Program.
MIT researchers have shown that chemotherapy drugs carried by a bottlebrush polymer (green and blue molecule) can be attached to an antibody, which guides the molecule to a tumor. This approach could avoid many of the side effects of systemic chemotherapy delivery.
The £42.8 million Generation New Era birth cohort study will create a comprehensive picture of early childhood development in all four nations of the UK.
Funded by the UKRI Economic and Social Research Council (ESRC), this is the first new UK-wide longitudinal birth cohort study in 25 years and comes as the government publishes its Giving every child the best start in life policy paper.
Generation New Era will collect data at two key developmental stages – between 9-11 months and again at 3-4 years – providing crucial insights before children enter formal education. The research will examine physical, mental and social development, and explore how technological, environmental and social changes affect early childhood experiences. The intention is that the initiative will track these children and their families throughout their lives.
Generation New Era will be led jointly by Co-Directors Professor Pasco Fearon of the University of Cambridge and Professors Alissa Goodman and Lisa Calderwood of UCL.
Professor Fearon, Director of the Centre for Child, Adolescent and Family Research at Cambridge, said: “Children’s lives have changed dramatically since the last UK birth cohort study was launched at the turn of the century. In the past decade, unprecedented social, technological, political and economic events have taken place that have changed the landscape for families raising children dramatically.
“New UK-wide data are needed urgently to help us understand how these changes impact children as they grow up, and there will be new opportunities and challenges for families coming down the line, like AI, that a study like this can help us to better understand.”
As a four-nations cohort study, Generation New Era will benefit from the expertise of senior academics based at the universities of Swansea, Ulster, and Edinburgh, who will serve as the study's leads in their countries.
The study will invite over 60,000 children and their families from across the UK, with the aim of recruiting 30,000 to participate in the project. There will be a particular focus on recruiting fathers as well as mothers, and on including groups previously underrepresented in population research, giving a voice to as many communities in UK society as possible.
This comprehensive approach will ensure the findings are representative of the diverse experiences of families across the country and that comparisons can be made to help all areas of the UK to learn what works best to improve lives and livelihoods.
The findings generated by the study will directly inform policy development across government departments, helping to ensure services and support for families are based on robust evidence.
Professor Alissa Goodman from the UCL Centre for Longitudinal Studies said: “Generation New Era is a landmark scientific endeavour which will improve the lives of children and benefit science and society for many years to come.
“As the government works to give every child the best start in life, the study can help shape vital policies and services for babies and parents across the UK. Thanks to the commitment of our participants, we can support the health and development of this generation - and help future generations thrive.”
Generation New Era is part of a long tradition of research council-funded UK longitudinal birth cohort studies which have followed the lives of tens of thousands of people over the past eight decades.
ESRC executive chair Stian Westlake said: “I am excited to see what Generation New Era will discover about the lives of children born next year and how they differ across the UK. The evidence this study produces can underpin policy that makes the UK a happier, healthier and fairer place, improving lives and livelihoods. It is an investment in the future that we are proud to make.”
The study will begin inviting families to take part from summer 2026.
Adapted from a press release from the ESRC
Cambridge is to co-lead a new UK-wide scientific study that will follow the lives of 30,000 children born in 2026, helping provide evidence to improve the lives of future generations.
The Whitehead Institute for Biomedical Research fondly remembers its founding director, David Baltimore, a former MIT Institute Professor and Nobel laureate who died Sept. 6 at age 87.
With discovery after discovery, Baltimore brought to light key features of biology with direct implications for human health. His work at MIT earned him a share of the 1975 Nobel Prize in Physiology or Medicine (along with Howard Temin and Renato Dulbecco) for discovering reverse transcriptase and identifying retroviruses, which use RNA to synthesize viral DNA.
Following the award, Baltimore reoriented his laboratory’s focus to pursue a mix of immunology and virology. Among the lab’s most significant subsequent discoveries were the identification of a pair of proteins that play an essential role in enabling the immune system to create antibodies for so many different molecules, and investigations into how certain viruses can cause cell transformation and cancer. Work from Baltimore’s lab also helped lead to the development of the important cancer drug Gleevec — the first small molecule to target an oncoprotein inside of cells.
In 1982, Baltimore partnered with philanthropist Edwin C. “Jack” Whitehead to conceive and launch the Whitehead Institute and then served as its founding director until 1990. Within a decade of its founding, the Baltimore-led Whitehead Institute was named the world’s top research institution in molecular biology and genetics.
“More than 40 years later, Whitehead Institute is thriving, still guided by the strategic vision that David Baltimore and Jack Whitehead articulated,” says Phillip Sharp, MIT Institute Professor Emeritus, former Whitehead board member, and fellow Nobel laureate. “Of all David’s myriad and significant contributions to science, his role in building the first independent biomedical research institute associated with MIT and guiding it to extraordinary success may well prove to have had the broadest and longest-term impact.”
Ruth Lehmann, director and president of the Whitehead Institute, and professor of biology at MIT, says: “I, like many others, owe my career to David Baltimore. He recruited me to Whitehead Institute and MIT in 1988 as a faculty member, taking a risk on an unproven, freshly-minted PhD graduate from Germany. As director, David was incredibly skilled at bringing together talented scientists at different stages of their careers and facilitating their collaboration so that the whole would be greater than the sum of its parts. This approach remains a core strength of Whitehead Institute.”
As part of the Whitehead Institute’s mission to cultivate the next generation of scientific leaders, Baltimore founded the Whitehead Fellows program, which provides extraordinarily talented recent PhD and MD graduates with the opportunity to launch their own labs, rather than to go into traditional postdoctoral positions. The program has been a huge success, with former fellows going on to excel as leaders in research, education, and industry.
David Page, MIT professor of biology, Whitehead Institute member, and former director who was the Whitehead's first fellow, recalls, “David was both an amazing scientist and a peerless leader of aspiring scientists. The launching of the Whitehead Fellows program reflected his recipe for institutional success: gather up the resources to allow young scientists to realize their dreams, recruit with an eye toward potential for outsized impact, and quietly mentor and support without taking credit for others’ successes — all while treating junior colleagues as equals. It is a beautiful strategy that David designed and executed magnificently.”
Sally Kornbluth, president of MIT and a member of the Whitehead Institute Board of Directors, says that “David was a scientific hero for so many. He was one of those remarkable individuals who could make stellar scientific breakthroughs and lead major institutions with extreme thoughtfulness and grace. He will be missed by the whole scientific community.”
“David was a wise giant. He was brilliant. He was an extraordinarily effective, ethical leader and institution builder who influenced and inspired generations of scientists and premier institutions,” says Susan Whitehead, member of the board of directors and daughter of Jack Whitehead.
Gerald R. Fink, the Margaret and Herman Sokol Professor Emeritus at MIT who was recruited by Baltimore from Cornell University as one of four founding members of the Whitehead Institute, and who succeeded him as director in 1990, observes: “David became my hero and friend. He upheld the highest scientific ideals and instilled trust and admiration in all around him.”
Baltimore was born in New York City in 1938. His scientific career began at Swarthmore College, where he earned a bachelor’s degree with high honors in chemistry in 1960. He then began doctoral studies in biophysics at MIT, but in 1961 shifted his focus to animal viruses and moved to what is now the Rockefeller University, where he did his thesis work in the lab of Richard Franklin.
After completing postdoctoral fellowships with James Darnell at MIT and Jerard Hurwitz at the Albert Einstein College of Medicine, Baltimore launched his own lab at the Salk Institute for Biological Studies from 1965 to 1968. Then, in 1968, he returned to MIT as a member of its biology faculty, where he remained until 1990. (Whitehead Institute’s members hold parallel appointments as faculty in the MIT Department of Biology.)
In 1990, Baltimore left the Whitehead Institute and MIT to become the president of Rockefeller University. He returned to MIT from 1994 to 1997, serving as an Institute Professor, after which he was named president of Caltech. Baltimore held that position until 2006, when he was elected to a three-year term as president of the American Association for the Advancement of Science.
For decades, Baltimore was viewed not just as a brilliant scientist and talented academic leader, but also as a wise counsel to the scientific community. For example, he helped organize the 1975 Asilomar Conference on Recombinant DNA, which created stringent safety guidelines for the study and use of recombinant DNA technology. He played a leadership role in the development of policies on AIDS research and treatment, and on genomic editing. Serving as an advisor to both organizations and individual scientists, he helped to shape the strategic direction of dozens of institutions and to advance the careers of generations of researchers. As Founding Member Robert Weinberg summarized it, “He had no tolerance for nonsense and weak science.”
In 2023, the Whitehead Institute established the endowed David Baltimore Chair in Biomedical Research, honoring Baltimore’s six decades of scientific, academic, and policy leadership and his impact on advancing innovative basic biomedical research.
“David was a visionary leader in science and the institutions that sustain it. He devoted his career to advancing scientific knowledge and strengthening the communities that make discovery possible, and his leadership of Whitehead Institute exemplified this,” says Richard Young, MIT professor of biology and Whitehead Institute member. “David approached life with keen observation, boundless curiosity, and a gift for insight that made him both a brilliant scientist and a delightful companion. His commitment to mentoring and supporting young scientists left a lasting legacy, inspiring the next generation to pursue impactful contributions to biomedical research. Many of us found in him not only a mentor and role model, but also a steadfast friend whose presence enriched our lives and whose absence will be profoundly felt.”
The squishy material can be loaded with anti-inflammatory drugs that are released in response to small changes in pH in the body. During an arthritis flare-up, a joint becomes inflamed and slightly more acidic than the surrounding tissue.
The material, developed by researchers at the University of Cambridge, has been designed to respond to this natural change in pH. As acidity increases, the material becomes softer and more jelly-like, triggering the release of drug molecules that can be encapsulated within its structure.
Since the material is designed to respond only within a narrow pH range, the team say that drugs could be released precisely where and when they are needed, potentially reducing side effects.
If used as an artificial cartilage in arthritic joints, this approach could allow for the continuous treatment of arthritis, improving the efficacy of drugs to relieve pain and fight inflammation. Arthritis affects more than 10 million people in the UK, costing the NHS an estimated £10.2 billion annually. Worldwide, it is estimated to affect over 600 million people.
While extensive clinical trials are needed before the material can be used in patients, the researchers say their approach could improve outcomes for people with arthritis, and for those with other conditions including cancer. Their results are reported in the Journal of the American Chemical Society.
The material developed by the Cambridge team uses specially engineered and reversible crosslinks within a polymer network. The sensitivity of these links to changes in acidity levels gives the material highly responsive mechanical properties.
The material was developed in Professor Oren Scherman’s research group in Cambridge’s Yusuf Hamied Department of Chemistry. The group specialises in designing and building these unique materials for a range of potential applications.
“For a while now, we’ve been interested in using these materials in joints, since their properties can mimic those of cartilage,” said Scherman, who is Professor of Supramolecular and Polymer Chemistry and Director of the Melville Laboratory for Polymer Synthesis. “But to combine that with highly targeted drug delivery is a really exciting prospect.”
“These materials can ‘sense’ when something is wrong in the body and respond by delivering treatment right where it’s needed,” said first author Dr Stephen O’Neill. “This could reduce the need for repeated doses of drugs, while improving patient quality of life.”
Unlike many drug delivery systems that require external triggers such as heat or light, this one is powered by the body’s own chemistry. The researchers say this could pave the way for longer-lasting, targeted arthritis treatments that automatically respond to flare-ups, boosting effectiveness while reducing harmful side effects.
In laboratory tests, the researchers loaded the material with a fluorescent dye to mimic how a real drug might behave. They found that at acidity levels typical of an arthritic joint, the material released substantially more of its cargo than it did at normal, healthy pH levels.
“By tuning the chemistry of these gels, we can make them highly sensitive to the subtle shifts in acidity that occur in inflamed tissue,” said co-author Dr Jade McCune. “That means drugs are released when and where they are needed most.”
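The pH-sensitive release behaviour can be pictured with a simple sigmoid model. This is purely illustrative — the paper does not report these numbers, and the gel's real chemistry is not a simple logistic function. The transition midpoint, steepness, and the "inflamed" and "healthy" pH values are all assumed here, chosen only to reflect the article's point that an arthritic joint is slightly more acidic than surrounding tissue:

```python
import math

# Illustrative model only: midpoint and steepness are assumptions,
# not values from the study.
MIDPOINT = 7.2     # assumed pH at which half the cargo is released
STEEPNESS = 10.0   # assumed sharpness of the gel's pH response

def release_fraction(ph):
    """Fraction of cargo released at a given pH (more acidic -> more release)."""
    return 1.0 / (1.0 + math.exp(STEEPNESS * (ph - MIDPOINT)))

inflamed = release_fraction(7.0)   # arthritic joint, slightly acidic (assumed)
healthy = release_fraction(7.4)    # normal tissue (assumed)

print(inflamed > healthy)   # True: far more release in the inflamed joint
```

Tuning the gel's chemistry, as the researchers describe, corresponds in this picture to shifting the midpoint and sharpening the transition so that release stays low at healthy pH and switches on only during a flare-up.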
The researchers say the approach could be tailored to a range of medical conditions, by fine-tuning the chemistry of the material. “It’s a highly flexible approach, so we could in theory incorporate both fast-acting and slow-acting drugs, and have a single treatment that lasts for days, weeks or even months,” said O’Neill.
The team’s next steps will involve testing the materials in living systems to evaluate their performance and safety in a physiological environment. The team say that if successful, their approach could open the door to a new generation of responsive biomaterials capable of treating chronic diseases with greater precision.
The research was supported by the European Research Council and the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI). The research is being commercialised with the support of Cambridge Enterprise, the University’s innovation arm. Oren Scherman is a Fellow of Jesus College, Cambridge.
Researchers have developed a material that can sense tiny changes within the body, such as during an arthritis flare-up, and release drugs exactly where and when they are needed.
A pair of disembodied hands thrusts a cake topped with candles through an open window. A nearby menu announces only one course: cold boiled owl. A figure glances worriedly at an accumulation of boxes of chocolate. The caption: “The Horror of Having a Birthday.”
The offbeat illustration is one of several unpublished works by the late artist Edward Gorey ’50, on display in a new exhibit at Houghton Library.
“Edward Gorey: The Gloomy Gallery” playfully engages with Gorey’s foreboding yet oddly cozy imagination, with its world-weary malaise, and its whimsical embrace of the nonsensical.
“Edward Gorey is an incredibly special figure, and we are very happy to celebrate him on the 100th year since his birth,” Molly Schwartzburg, Philip Hofer Curator of Printing and Graphic Arts at Houghton Library, said at an opening reception on Sept. 4.
The exhibition, which is free and open to the public, marks Gorey’s 100th birthday and the 75th anniversary of his graduation from Harvard College. The pieces are drawn from the Houghton Library’s extensive holdings on the artist and span his career.
Edward Gorey. “The Horror of Having a Birthday,” around 1948–1955. From the estate of Anthony N. and Ann Smith. Houghton Library, Harvard University.
Of note are recently acquired works, including the “Birthday” piece, which Gorey gave to Tony Smith, his Harvard classmate and Eliot House roommate. The illustrations were passed down to Smith’s daughter, Barbie Selby, who attended the opening reception.
“Molly and Maggie did a great job mixing my dad’s art with the art that’s been printed or published,” Selby said, referring to Schwartzburg and co-curator Maggie Erwin, curatorial assistant in the printing and graphic arts department. “It’s been magical.”
The newly acquired pieces offer glimpses into the artist’s life at Harvard.
“We see evidence in these new drawings of his French literature classes, of the experiences of World War II veterans at Harvard, of architecture around Harvard Square, and of a Gorey friendship outside the rarified arty circles that have been written about for so long,” Schwartzburg said. “We also can see just how early on he established his distinctive aesthetic and flair for linguistic play. Harvard seems to have been a profoundly fruitful environment for him as a young artist.”
Gorey famously roomed with poet Frank O’Hara, but much less is known about his friendship with Smith, his senior year roommate.
The two were not a natural pairing. Smith was a Phillips Exeter Academy alumnus, the scion of a wealthy Republican family from Fall River, Mass. He concentrated in economics and would later spend his career in the insurance industry. He lived in Raleigh, N.C., for 50 years, according to his obituary.
Edward Gorey. “Halfway House,” around 1948–1955. From the estate of Anthony N. and Ann Smith. Houghton Library, Harvard University.
Gorey, a Chicago native, was considered something of an artistic eccentric who was entrenched in Harvard’s queer literary scene. He later moved to New York and settled on Cape Cod in the 1980s. He died there in 2000.
Both men served in the military during World War II.
The two had a mutual friend and started periodically hanging out together in junior year. Schwartzburg said there isn’t a tremendous amount known about their ties. They apparently shared an interest in beachcombing and thrifting — Gorey’s biographer Mark Dery noted the two made weekly pilgrimages to Filene’s Basement.
Smith showed up frequently in Gorey’s artwork as a befuddled-looking figure with an elongated face. According to Smith’s daughter Selby, friends at the time speculated that Gorey, who was famously cagey about his sexuality, might have been infatuated with his roommate.
“I don’t know if that’s true,” Selby said. “My dad would have been oblivious to it.”
Edward Gorey. “The picture wasn’t, after all, me,” around 1948–1955. From the estate of Anthony N. and Ann Smith. Houghton Library, Harvard University.
A prodigious writer and illustrator, Gorey authored 116 books and is estimated to have created cover art for more than 500 books by other authors.
Among his most famous works is “The Gashlycrumb Tinies: A Very Gorey Alphabet Book,” in which each letter corresponds to a child perishing in darkly comic and sometimes surreal ways.
Gorey influenced countless contemporary writers, artists, and film directors, Schwartzburg said.
“I think that one of the reasons his popularity is so enduring is that his work feels untethered from his specific time and place. By setting his work in a sort of tweaked version of the past, he has somehow prevented it from ever becoming outdated.”
The exhibit revels in Gorey’s odd obsessions: balancing bicycles, Victorian vestments, furtive figures, and, of course, artful alliterations.
It also displays original materials from Houghton’s holdings from the Poets’ Theatre, the Cambridge organization that Gorey, along with O’Hara and a group of fellow Harvard and Radcliffe alumni, founded shortly after his graduation.
“Edward Gorey: The Gloomy Gallery” is open in the Edison and Newman Room in Houghton Library through Jan. 12. As Schwartzburg put it, “If you feel like you want to get gloomy, or if you’re feeling gloomy and you need a little lift, this is the place.”
Most people recognize Alzheimer’s disease by its devastating symptoms, such as memory loss, while new drugs target pathological hallmarks of the disease, such as amyloid protein plaques. Now, a sweeping new open-access study in the Sept. 4 edition of Cell by MIT researchers shows the importance of understanding the disease as a battle over how well brain cells control the expression of their genes. The study paints a high-resolution picture of a desperate struggle to maintain healthy gene expression and regulation, where the consequences of failure or success are nothing less than the loss or preservation of cell function and cognition.
The study presents a first-of-its-kind, multimodal atlas of combined gene expression and gene regulation spanning 3.5 million cells from six brain regions, obtained by profiling 384 post-mortem brain samples across 111 donors. The researchers profiled both the “transcriptome,” showing which genes are expressed into RNA, and the “epigenome,” the set of chromosomal modifications that establish which DNA regions are accessible and thus utilized between different cell types.
The resulting atlas revealed many insights showing that the progression of Alzheimer’s is characterized by two major epigenomic trends. The first is that vulnerable cells in key brain regions suffer a breakdown of the rigorous nuclear “compartments” they normally maintain to ensure some parts of the genome are open for expression but others remain locked away. The second major finding is that susceptible cells experience a loss of “epigenomic information,” meaning they lose their grip on the unique pattern of gene regulation and expression that gives them their specific identity and enables their healthy function.
Accompanying the evidence of compromised compartmentalization and eroded epigenomic information are many specific findings pinpointing molecular circuitry that breaks down by cell type, by region, and by gene network. The researchers found, for instance, that when epigenomic conditions deteriorate, the door opens to expression of many disease-associated genes, whereas cells that manage to keep their epigenomic house in order can keep those genes in check. Moreover, the researchers clearly saw that where epigenomic breakdowns occurred, people lost cognitive ability, but where epigenomic stability remained, so did cognition.
“To understand the circuitry, the logic responsible for gene expression changes in Alzheimer’s disease [AD], we needed to understand the regulation and upstream control of all the changes that are happening, and that’s where the epigenome comes in,” says senior author Manolis Kellis, a professor in the Computer Science and Artificial Intelligence Lab and head of MIT’s Computational Biology Group. “This is the first large-scale, single-cell, multi-region gene-regulatory atlas of AD, systematically dissecting the dynamics of epigenomic and transcriptomic programs across disease progression and resilience.”
By providing that detailed examination of the epigenomic mechanisms of Alzheimer’s progression, the study provides a blueprint for devising new Alzheimer’s treatments that can target factors underlying the broad erosion of epigenomic control or the specific manifestations that affect key cell types such as neurons and supporting glial cells.
“The key to developing new and more effective treatments for Alzheimer’s disease depends on deepening our understanding of the mechanisms that contribute to the breakdowns of cellular and network function in the brain,” says Li-Huei Tsai, Picower Professor, director of The Picower Institute for Learning and Memory, a founding member of MIT’s Aging Brain Initiative, and, along with Kellis, co-corresponding author of the study. “This new data advances our understanding of how epigenomic factors drive disease.”
Kellis Lab members Zunpeng Liu and Shanshan Zhang are the study’s co-lead authors.
Compromised compartments and eroded information
Among the post-mortem brain samples in the study, 57 came from donors to the Religious Orders Study or the Rush Memory and Aging Project (collectively known as “ROSMAP”) who did not have AD pathology or symptoms, while 33 came from donors with early-stage pathology and 21 came from donors at a late stage. The samples therefore provided rich information about the symptoms and pathology each donor was experiencing before death.
In the new study, Liu and Zhang combined analyses of single-cell RNA sequencing of the samples, which measures which genes are being expressed in each cell, and ATAC-seq, which measures whether chromosomal regions are accessible for gene expression. Considered together, these transcriptomic and epigenomic measures enabled the researchers to understand the molecular details of how gene expression is regulated across seven broad classes of brain cells (e.g., neurons or glial cell types) and 67 cell subtypes (e.g., 17 kinds of excitatory neurons or six kinds of inhibitory ones).
The researchers annotated more than 1 million gene-regulatory control regions that different cells employ, via epigenomic marking, to establish their specific identities and functions. Then, by comparing cells from Alzheimer’s brains to those from brains without the disease, and accounting for stage of pathology and cognitive symptoms, they could produce rigorous associations between the erosion of these epigenomic markings and the ultimate loss of function.
For instance, they saw that among people who advanced to late-stage AD, normally repressive compartments opened up for more expression, and compartments that were normally more open in health became more repressed. Worryingly, when brain cells’ normally repressive compartments opened up, those cells became more afflicted with disease.
“For Alzheimer’s patients, repressive compartments opened up, and gene expression levels increased, which was associated with decreased cognitive function,” explains Liu.
But when cells managed to keep their compartments in order such that they expressed the genes they were supposed to, people remained cognitively intact.
Meanwhile, based on the cells’ expression of their regulatory elements, the researchers created an epigenomic information score for each cell. Generally, information declined as pathology progressed, but the decline was particularly notable among cells in the two brain regions affected earliest in Alzheimer’s: the entorhinal cortex and the hippocampus. The analyses also highlighted specific cell types that were especially vulnerable, including microglia, which play immune and other roles; oligodendrocytes, which produce myelin insulation for neurons; and particular kinds of excitatory neurons.
Risk genes and “chromatin guardians”
Detailed analyses in the paper highlighted how epigenomic regulation tracked with disease-related problems, Liu notes. The e4 variant of the APOE gene, for instance, is widely understood to be the single biggest genetic risk factor for Alzheimer’s. In APOE4 brains, microglia initially responded to the emerging disease pathology with an increase in their epigenomic information, suggesting that they were stepping up to their unique responsibility to fight off disease. But as the disease progressed, the cells exhibited a sharp drop-off in information, a sign of deterioration and degeneration. This turnabout was strongest in people who had two copies of APOE4, rather than just one. The findings, Kellis said, suggest that APOE4 might destabilize the genome of microglia, causing them to burn out.
Another example is the fate of neurons expressing the gene RELN and its protein Reelin. Prior studies, including by Kellis and Tsai, have shown that RELN-expressing neurons in the entorhinal cortex and hippocampus are especially vulnerable in Alzheimer’s, but promote resilience if they survive. The new study sheds light on their fate by demonstrating that they exhibit early and severe epigenomic information loss as disease advances, but that in people who remained cognitively resilient, the neurons maintained epigenomic information.
In yet another example, the researchers tracked what they colloquially call “chromatin guardians” because their expression sustains and regulates cells’ epigenomic programs. For instance, cells with greater epigenomic erosion and advanced AD progression displayed increased chromatin accessibility in areas that were supposed to be locked down by Polycomb repression genes or other gene expression silencers. While resilient cells expressed genes promoting neural connectivity, epigenomically eroded cells expressed genes linked to inflammation and oxidative stress.
“The message is clear: Alzheimer’s is not only about plaques and tangles, but about the erosion of nuclear order itself,” Kellis says. “Cognitive decline emerges when chromatin guardians lose ground to the forces of erosion, switching from resilience to vulnerability at the most fundamental level of genome regulation.
“And when our brain cells lose their epigenomic memory marks and epigenomic information at the lowest level, deep inside our neurons and microglia, it seems that Alzheimer’s patients also lose their memory and cognition at the highest level.”
Other authors of the paper are Benjamin T. James, Kyriaki Galani, Riley J. Mangan, Stuart Benjamin Fass, Chuqian Liang, Manoj M. Wagle, Carles A. Boix, Yosuke Tanigawa, Sukwon Yun, Yena Sung, Xushen Xiong, Na Sun, Lei Hou, Martin Wohlwend, Mufan Qiu, Xikun Han, Lei Xiong, Efthalia Preka, Lei Huang, William F. Li, Li-Lun Ho, Amy Grayson, Julio Mantero, Alexey Kozlenkov, Hansruedi Mathys, Tianlong Chen, Stella Dracheva, and David A. Bennett.
Funding for the research came from the National Institutes of Health, the National Science Foundation, the Cure Alzheimer’s Fund, the Freedom Together Foundation, the Robert A. and Renee E. Belfer Family Foundation, Eduardo Eurnekian, and Joseph P. DiSabato.
MIT researchers analyzed a massive dataset of gene expression and regulation measures to better understand how the human brain's control of gene expression is affected by Alzheimer's disease. They found both broad trends and specific mechanisms by which the control becomes compromised and eroded.
The Graduate School kicked off its yearlong 125th Anniversary celebration, which will explore the school’s impact throughout its history, at the annual two-day orientation.
Study finds mealtimes may impact health, longevity in older adults
Researchers studied changes to meal timing in older adults and discovered people experience gradual shifts as they age. They also found characteristics that may contribute to mealtime shifts and revealed specific traits linked to an earlier death.
Results from the Mass General Brigham study are published in Communications Medicine
“Our research suggests that changes in when older adults eat, especially the timing of breakfast, could serve as an easy-to-monitor marker of their overall health status,” said lead author Hassan Dashti, a nutrition scientist and circadian biologist at Harvard-affiliated Massachusetts General Hospital and assistant professor of anesthesia at Harvard Medical School.
“Patients and clinicians can possibly use shifts in mealtime routines as an early warning sign to look into underlying physical and mental health issues. Also, encouraging older adults to keep consistent meal schedules could become part of broader strategies to promote healthy aging and longevity,” Dashti said.
Dashti and his colleagues — including senior author Altug Didikoglu of the Izmir Institute of Technology in Turkey — examined key aspects of meal timing that are significant for aging populations to determine whether certain patterns might signal, or even influence, health outcomes later in life. The research team analyzed data, including blood samples, from 2,945 community-dwelling adults in the UK, aged 42–94, who were followed for more than 20 years. They found that as older adults age, they tend to eat breakfast and dinner at later times, while also narrowing the overall time window in which they eat each day.
Later breakfast time was consistently associated with having physical and mental health conditions such as depression, fatigue, and oral health problems. Difficulty with meal preparation and worse sleep were also linked with later mealtimes. Notably, later breakfast timing was associated with an increased risk of death during follow-up. Individuals genetically predisposed to characteristics associated with being a “night owl” (preferring later sleep and wake times) tended to eat meals at later times.
“Up until now, we had a limited insight into how the timing of meals evolves later in life and how this shift relates to overall health and longevity,” said Dashti. “Our findings help fill that gap by showing that later meal timing, especially delayed breakfast, is tied to both health challenges and increased mortality risk in older adults. These results add new meaning to the saying that ‘breakfast is the most important meal of the day,’ especially for older individuals.”
Dashti noted that this has important implications as time-restricted eating and intermittent fasting gain popularity, since the health impacts of shifting meal schedules may differ significantly in aging populations from those in younger adults.
This study was supported by the National Institutes of Health.
Scholars from business, economics, healthcare, and policy offer insights into areas that deserve a close look
Sy Boles
Harvard Staff Writer
The pace of AI development is surging, and the effects on the economy, education, medicine, research, jobs, law, and lifestyle will be far-reaching and pervasive. Moves to begin regulation are surfacing on the federal and state level.
President Trump in July unveiled executive orders and an AI action plan intended to speed the development of artificial intelligence and cement the U.S. as the global leader in the technology.
The suite of changes bars the federal government from buying AI tools it considers ideologically biased; eases restrictions on the permitting process for new AI infrastructure projects; and promotes the export of American AI products around the world, among other developments.
The National Conference of State Legislatures reports that in the 2025 session all 50 states considered AI-related measures.
Campus researchers across a series of fields offer their takes on areas that deserve a look.
Photo illustrations by Liz Zonarich/Harvard Staff
Risks of illegal scams, price-fixing collusion
Eugene Soltes is the McLean Family Professor of Business Administration at Harvard Business School.
As artificial intelligence becomes more ubiquitous within the infrastructure of business and finance, we’re quickly seeing the potential for unprecedented risks that our legal frameworks and corporate institutions are unprepared to address.
Consider algorithmic pricing. Companies deploying AI to optimize profits can already witness bots independently “learn” that price collusion yields higher returns. When firms’ algorithms tacitly coordinate to inflate prices, who bears responsibility — the companies, software vendors, or engineers? Current antitrust practice offers no clear answer.
The danger compounds when AI’s optimization power targets human behavior directly.
Research confirms that AI already has persuasive capabilities that outperform skilled negotiators. Applied to vulnerable populations, AI transforms traditional scams into bespoke, AI-tailored schemes.
“Pig-butchering frauds” [where perpetrators build trust of victims over time] that once required teams of human operators can be automated, personalized, and deployed en masse, deceiving even the most scrupulous of us with deep-fake audio and video.
Most alarming is the prospect of AI agents with direct access to financial systems, particularly cryptocurrency networks.
Consider an AI agent given access to a cryptocurrency wallet and instructed to “grow its portfolio.” Unlike traditional banking where transactions can be frozen and reversed, once an AI deploys a fraudulent smart contract or initiates a harmful transaction, no authority can stop it.
The combination of immutable smart contracts and autonomous crypto payments creates extraordinary possibilities — including automated bounty systems for real-world violence that execute without human intervention.
These scenarios aren’t distant speculation; they’re emerging realities our current institutions cannot adequately prevent or prosecute. Yet solutions exist: enhanced crypto monitoring, mandatory kill switches for AI agents, and human-in-the-loop requirements for models.
Addressing these challenges demands collaboration between innovators who design AI technology and governments empowered to limit its potential for harm.
The question isn’t whether these risks will materialize, but whether we’ll act before they do.
Choosing path of pluralism
Danielle Allen is the James Bryant Conant University Professor and the Director of the Democratic Knowledge Project and the Allen Lab for Democracy Renovation at the Harvard Kennedy School.
As those at my HKS lab, the Allen Lab for Democracy Renovation, see it, three paradigms for governing AI currently exist in the global landscape: an accelerationist paradigm, an effective altruism paradigm, and a pluralism paradigm.
On the accelerationist paradigm, the goal is to move fast and break things, speeding up technological development as much as possible so that we get to new solutions to global problems (from labor to climate change), while maximally organizing the world around the success of high-IQ individuals.
Labor is replaced; the Earth is made non-necessary via access to Mars; smart people use tech-fueled genetic selection to produce even smarter babies.
On the effective altruism paradigm, there is equally a goal to move fast and break things, but also a recognition that replacing human labor with tech will damage the vast mass of humanity. The commitment to tech development therefore goes hand in hand with a plan to redistribute the productivity gains that flow to tech companies with comparatively small labor forces to the rest of humanity, via universal basic income policies.
On the pluralism paradigm, technology development is focused not on overmatching and replacing human intelligence but on complementing and extending the multiple or plural kinds of human intelligence with equally plural kinds of machine intelligence.
The purpose here is to activate and extend human pluralism for the goods of creativity, innovation, and cultural richness, while fully integrating the broad population into the productive economy.
Pennsylvania’s recent commitment to deploy technology in ways that empower rather than replace humans is an example, as is Utah’s recently passed Digital Choice Act, which places ownership of data in social media platforms back in the hands of individual users and demands interoperability of platforms, shifting power from tech corporations to citizens and consumers.
If the U.S. wants to win the AI race as the kind of society we are — a free society of free and equal self-governing citizens — then we really do need to pursue the third paradigm. Let’s not discard democracy and freedom when we toss out “woke” ideology.
Guardrails for mental health advice, support
Ryan McBain is an assistant professor at Harvard Medical School and a senior policy researcher at RAND.
As more people — including teens — turn to AI for mental health advice and emotional support, regulation should do two things: reduce harm and promote timely access to evidence-based resources. People will not stop asking chatbots sensitive questions. Policy should make those interactions safer and more useful, not attempt to vanquish them.
Some guardrails already exist.
Systems like ChatGPT and Claude often refuse “very high-risk” suicide prompts and route users to the 988 Suicide & Crisis Lifeline.
Yet many scenarios are nuanced. Framed as learning survival knots for a camping trip, a chatbot might describe how to tie a noose; framed as slimming for a wedding, it might suggest tactics for a crash diet.
Regulatory priorities should reflect the nuance of this new technology.
First, require standardized, clinician-anchored benchmarks for suicide-related prompts — with public reporting. Benchmarks should include multi-turn (back-and-forth) dialogues that supply enough context to test the sorts of nuances described above, in which chatbots can be coaxed across a red line.
Second, strengthen crisis routing with up-to-date 988 information, geolocated resources, and “support-plus-safety” templates that validate individuals’ emotions, encourage help-seeking, and avoid detailed information about means of harm.
Third, enforce privacy. Prohibit advertising and profiling around mental-health interactions, minimize data retention, and require a “transient memory” mode for sensitive queries.
Fourth, tie claims to evidence. If a model is marketed for mental health support, it should meet a duty-of-care standard — through pre-deployment evaluation, post-deployment monitoring, independent audits, and alignment with risk-management frameworks.
Fifth, the administration should fund independent research through NIH and similar channels so safety tests keep pace with model updates.
We are still early enough in the AI era to set a high floor — benchmarks, privacy standards, and crisis routing — while promoting transparency through audits and reporting.
Regulators can also reward performance: for instance, by allowing systems that meet strict thresholds to offer more comprehensive mental-health functions such as clinical decision support.
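The first recommendation — standardized, clinician-anchored benchmarks built from multi-turn dialogues — can be caricatured in code. This is a minimal sketch under stated assumptions: the record layout, the behavior labels, and the single example case are all illustrative, not an existing benchmark or scoring protocol.

```python
from dataclasses import dataclass


@dataclass
class BenchmarkItem:
    # One clinician-annotated test case: a multi-turn dialogue plus the
    # expected model behavior (e.g. "refuse", "route_to_988", "answer")
    turns: list
    risk_label: str          # e.g. "very_high", "intermediate", "low"
    expected_behavior: str


def score(item: BenchmarkItem, model_behavior: str) -> bool:
    """Pass if the model's behavior matches the clinician-anchored expectation."""
    return model_behavior == item.expected_behavior


# A nuanced multi-turn case like those described above: a benign framing
# that drifts toward a red line across turns, so context matters.
case = BenchmarkItem(
    turns=["What knots should I learn for camping?",
           "Which of those could hold a person's weight?"],
    risk_label="intermediate",
    expected_behavior="route_to_988",
)
```

Public reporting would then aggregate pass rates per risk label across many such cases, rather than judging single-turn prompts in isolation.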
Embrace global collaboration
David Yang is an economics professor and director of the Center for History and Economics at Harvard, whose work draws lessons from China.
Current policies on AI are heavily influenced by a narrative of geopolitical competition, often perceived as zero-sum or even negative-sum. It’s crucial to challenge this perspective and recognize the immense potential, and arguably necessity, for global collaboration in this technological domain.
The history of AI development, with its notably international leading teams, exemplifies such collaboration. By contrast, framing AI as a dual-use technology can hinder coordination on global AI safety frameworks and dialogues.
My collaborators and I are researching how narratives around technology have evolved over decades, aiming to understand the dynamics and forces, particularly how competitive narratives emerge and influence policymaking.
Second, U.S. AI strategy has recently concentrated on maintaining American dominance in innovation and the global market.
However, AI products developed in one innovation hub may not be suitable for all global applications. In a recent paper with my colleague Josh Lerner at HBS and collaborators, we show that China’s emergence as a major innovation hub has spurred innovation and entrepreneurship in other emerging markets, offering solutions more appropriate to local conditions than those solely benchmarked against the U.S.
Therefore, striking a balance is crucial: preserving U.S. AI innovation and technological leadership while fostering local collaborations and entrepreneurship. This approach ensures AI technology, its applications, and the general direction of innovation are relevant to local contexts and reach a global audience.
Paradoxically, ceding more control could, in my view, consolidate technology and market power for U.S. AI innovators.
Encourage accountability as well as innovation
Paulo Carvão is senior fellow at the Mossavar-Rahmani Center for Business and Government at the Harvard Kennedy School who researches AI regulation in the U.S.
The Trump administration’s AI Action Plan marks a shift from cautious regulation to industrial acceleration. Framed as a rallying cry for American tech dominance, the plan bets on private-sector leadership to drive innovation, global adoption, and economic growth.
Previous technologies, such as internet platforms and social media, evolved without governance frameworks: policymakers from the 1990s through the 2010s made a deliberate decision to let the industry grow unregulated and shielded from liability.
AI’s rapid adoption is taking place amid heightened awareness of the societal implications of the previous technology waves. However, the industry and its main investors advocate for implementing a similar playbook, one that is light on safeguards and rich in incentives.
What is most unusual about the recently announced strategy is what it is missing. It dismisses guardrails as barriers to innovation, placing trust in market forces and voluntary action.
That may attract investment, but it leaves critical questions unanswered: Who ensures fairness in algorithmic decision-making? How do we protect workers displaced by automation? What happens when infrastructure investment prioritizes computing power over community impact?
Still, the plan gets some things right. It recognizes AI as a full-stack challenge, from chips to models to standards, and takes seriously the need for U.S. infrastructure and workforce development. Its international strategy offers a compelling framework for global leadership.
Ultimately, innovation and accountability do not need to be trade-offs. They are a dual imperative.
Incentivize standards-based independent red-teaming, support a market for compliance and audits, and build capacity across the government to evaluate AI systems. If we want the world to trust American-made AI, we must ensure it earns that trust, at home and abroad.
Regulation that recognizes healthcare bottlenecks
Bernardo Bizzo is senior director of Mass General Brigham AI and assistant professor of radiology at Harvard Medical School.
Clinical AI regulation has been mismatched to the problems clinicians face.
To fit existing device pathways, vendors narrow AI to single conditions and rigid workflows. That can reduce perceived risk and produce narrow measures of effectiveness, but it also suppresses impact and adoption. It does not address the real bottleneck in U.S. care: efficiency under rising volumes and workforce shortages.
Foundation models can draft radiology reports, summarize charts, and orchestrate routine steps in agentic workflows. FDA has taken steps for iterative software, yet there is still no widely used pathway specific to foundation model clinical copilots that continuously learn while generating documentation across many conditions.
Elements of a deregulatory posture could help if done carefully.
America’s AI Action Plan proposes an AI evaluations ecosystem and regulatory sandboxes that enable rapid but supervised testing in real settings, including healthcare. This aligns with the Healthcare AI Challenge, a collaborative community powered by MGB AI Arena that lets experts across the country evaluate AI at scale on multisite real-world data.
With FDA participation, this approach can generate the evidence agencies and payers need and the clinical utility assessments providers are asking for.
Some pre-market requirements may ultimately lighten, though nothing has been enacted. If that occurs, more responsibility will move to developers and deploying providers. That shift is feasible only if providers have practical tools and resources for local validation and monitoring, since most are already overwhelmed.
In parallel, developers are releasing frequent and more powerful models, and while some await a regulated, workable path for clinical copilots, many are pushing experimentation into pilots or clinical research workflows, often without appropriate guardrails.
Where I would welcome more regulation is after deployment.
Require local validation before go-live, continuous post-market monitoring such as the American College of Radiology’s Assess-AI registry, and routine reporting back to FDA so regulators can see effectiveness and safety in practice, rather than relying mainly on underused medical device reporting, which has known challenges with generalizability.
Healthcare AI needs policies that expand trusted, affordable compute, adopt AI monitoring and registries, enable sector testbeds at scale, and reward demonstrated efficiency that can protect patients without slowing progress.
“Passengers” brings theater and circus together on the A.R.T. stage.
Madeleine Wright
A.R.T. Communications
7 Fingers co-founders explain how they unite the two art forms in their latest A.R.T. production
The 7 Fingers, a contemporary physical-theater troupe, brings “Passengers” to the American Repertory Theater this month. In this edited conversation, Diane Paulus, the Terrie and Bradley Bloom Artistic Director of the A.R.T., speaks with 7 Fingers co-founders Gypsy Snider (circus choreographer of “Pippin”) and Shana Carroll (writer, director, and choreographer of “Passengers”) about the human scale of contemporary circus, how the art form relates to theater, and the troupe’s special relationship with Boston-area audiences.
Can you tell us a little about how your contemporary circus collective The 7 Fingers came to be?
Carroll: I started in theater and discovered circus, versus Gypsy, who started in circus and then fell in love with theater along the way. Our paths crossed through the years of touring in Europe, Canada, and the United States. And in 2001, we said, “Well, now is a good time to create our own thing together.”
Snider: That was when The 7 Fingers was born. We are seven co-founders, and the company really came into being because we were at the point when we wanted to go from being performers to becoming creators. When we started in 2002, we were looking at emerging forms that were inspiring to us, things like Blue Man Group or De La Guarda—shows that were undefinable, but deeply human and energetic and that really combined mediums.
We were a little bit rebellious and looking away from the big spectacle, the elaborate costumes, and the fantastical. We wanted to come back to something that was essentially human. Now we are a creative collective and production company that tours the world with our own shows as well as collaborative productions and projects.
The 7 Fingers works at the crossroads of circus and theater. Can you tell us more about that combination of mediums?
Carroll: I think we try as much as we can to do this fusion hybrid form of circus and theater and bring all of our original passions back into the same place.
I didn’t really like circus growing up. I always thought that Ringling Brothers and the huge circuses were just so unidentifiable that I didn’t see the human being inside of it. It was so out of this world that I didn’t appreciate that there was a real human doing extraordinary things. I didn’t have any sense of emotional connection to the person doing it. Seeing it close-up was what made me see its beauty and its metaphor and its potential.
Snider: The history of circus and contemporary circus in the United States is a complex one. In so many ways, “circus” has been kind of a cursed word, in the sense that it’s a form considered to be outside of society. The film “The Greatest Showman” captured this, portraying a circus sideshow that was considered lower-class yet wanted to be thought of like the opera or the ballet.
There’s always been this tension in the States around the art form. Is this street performing? Is it just base popular entertainment? Is it just about doing tricks? At the same time, across history, in early Chinese and Russian culture and in Eastern European and African cultures, circus has been founded on the idea of learning a skill and presenting or offering that skill to the audience without a fourth wall. Circus has also always hinted at narrative, at the idea of creating character and storyline. In contemporary circus, which began in the late ’60s and early ’70s, story, interpretation, movement, and image-based art began to influence the form more profoundly.
With a 7 Fingers show, we are trying to create storytelling that connects emotionally with the audience, but without necessarily telling a story that has a beginning, middle, and end in the traditional theater sense. Sometimes we do that as well, but what is important to us is that we use images that are inherent in the physical intensity of the circus: just as song and dance might move a story forward in a musical, the acrobatics must as well.
When we talk and think about how extraordinary the circus is, one of the key things for us is to tune in to how vulnerable we are within that extraordinary act. That human fragility has become the core of our storytelling at The 7 Fingers. Vulnerability and humanity are what drive everything we do. “Passengers” is Shana’s baby, and it is a show that explodes with extraordinary movement in order to express authentic and absolute vulnerability.
Shana, can you tell us how “Passengers” came about?
Carroll: The little seed was planted when I was a kid growing up in Berkeley, California. We had a train that passed 10 to 15 miles from us, and I always remember how we’d hear the train whistle. I remember noticing how we stopped hearing other city sounds, like the buses that were constantly passing, but we never stopped hearing the train. On the one hand, there’s something nostalgic about a train; it seems like a remnant of a past era. But it’s also a promise of a future, of an unknown land. So, to me, trains have this way of talking about the past and the future all at once.
I wanted to create a piece where we have a cast of characters that all are looking for something, needing something, and are leaving for some reason. And then in a moment of suspended reality on the train, suspended time, suspended lives, they have fateful and unexpected things happen.
Right before starting the creation of “Passengers,” a very close friend passed away unexpectedly young. And it was just so tragic. I was really grieving, to the point that I thought, I have to go into rehearsals for this show, and I don’t know how I’m going to do it. I was feeling like nothing made sense. I couldn’t get out of bed one day and I said to my husband, I want the world to go back to being a place where magical things happen and not a place where young men I love die. And he said, it’s both. And when he said, it’s both, that became the thesis statement of the show. You see the two rails of the tracks as parallel tracks, and it’s joyful and celebratory, beautiful and fun, and it’s also tragic, and people die early. We travel down these two realities at once.
A.R.T. audiences are familiar with 7 Fingers’ work through Gypsy’s incredible circus choreography for our production of “Pippin.” What does it mean to you to return with “Passengers”?
Snider: I could almost cry thinking about it. “Pippin” was one of the most important creative experiences of my career. I learned so much on that production. To be able to come back to the A.R.T. with The 7 Fingers’ work is really one of the biggest honors. I can’t express enough how excited we are.
“Passengers” runs through Sept. 26 at the A.R.T.’s Loeb Drama Center. For ticket information visit the website.
At any given moment, trillions of particles called neutrinos are streaming through our bodies and every material in our surroundings, without noticeable effect. Far lighter than electrons, these ghostly entities are the most abundant particles with mass in the universe.
The exact mass of a neutrino is a big unknown. The particle is so small, and interacts so rarely with matter, that it is incredibly difficult to measure. Scientists attempt to do so by harnessing nuclear reactors and massive particle accelerators to generate unstable atoms, which then decay into various byproducts including neutrinos. In this way, physicists can manufacture beams of neutrinos that they can probe for properties including the particle’s mass.
Now MIT physicists propose a much more compact and efficient way to generate neutrinos that could be realized in a tabletop experiment.
In a paper appearing in Physical Review Letters, the physicists introduce the concept for a “neutrino laser” — a burst of neutrinos that could be produced by laser-cooling a gas of radioactive atoms down to temperatures colder than interstellar space. At such frigid temps, the team predicts the atoms should behave as one quantum entity, and radioactively decay in sync.
The decay of radioactive atoms naturally releases neutrinos, and the physicists say that in a coherent, quantum state this decay should accelerate, along with the production of neutrinos. This quantum effect should produce an amplified beam of neutrinos, broadly similar to how photons are amplified to produce conventional laser light.
“In our concept for a neutrino laser, the neutrinos would be emitted at a much faster rate than they normally would, sort of like a laser emits photons very fast,” says study co-author Ben Jones PhD ’15, an associate professor of physics at the University of Texas at Arlington.
As an example, the team calculated that such a neutrino laser could be realized by trapping 1 million atoms of rubidium-83. Normally, the radioactive atoms have a half-life of about 82 days, meaning that half the atoms decay, shedding an equivalent number of neutrinos, every 82 days. The physicists show that, by cooling rubidium-83 to a coherent, quantum state, the atoms should undergo radioactive decay in mere minutes.
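The half-life arithmetic behind that comparison is standard exponential decay, N(t) = N₀ · (1/2)^(t/T). A quick sketch, using only the figures quoted in the article (1 million atoms, an 82-day half-life):

```python
def atoms_remaining(n0: float, half_life_days: float, t_days: float) -> float:
    """Standard radioactive decay: N(t) = N0 * (1/2) ** (t / T_half)."""
    return n0 * 0.5 ** (t_days / half_life_days)


n0 = 1_000_000      # trapped rubidium-83 atoms, per the article's example
half_life = 82.0    # days, as quoted

# Ordinary decay: half the sample has decayed (shedding ~500,000
# neutrinos) only after a full 82-day half-life.
remaining = atoms_remaining(n0, half_life, 82.0)
```

The proposed coherent state would compress this months-long emission into minutes, which is what makes the "laser" analogy apt.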
“This is a novel way to accelerate radioactive decay and the production of neutrinos, which to my knowledge, has never been done,” says co-author Joseph Formaggio, professor of physics at MIT.
The team hopes to build a small tabletop demonstration to test their idea. If it works, they envision a neutrino laser could be used as a new form of communication, by which the particles could be sent directly through the Earth to underground stations and habitats. The neutrino laser could also be an efficient source of radioisotopes, which, along with neutrinos, are byproducts of radioactive decay. Such radioisotopes could be used to enhance medical imaging and cancer diagnostics.
Coherent condensate
For every atom in the universe, there are about a billion neutrinos. A large fraction of these invisible particles may have formed in the first moments following the Big Bang, and they persist in what physicists call the “cosmic neutrino background.” Neutrinos are also produced whenever atomic nuclei fuse together or break apart, such as in the fusion reactions in the sun’s core, and in the normal decay of radioactive materials.
Several years ago, Formaggio and Jones separately considered a novel possibility: What if a natural process of neutrino production could be enhanced through quantum coherence? Initial explorations revealed fundamental roadblocks in realizing this. Years later, while discussing the properties of ultracold tritium (an unstable isotope of hydrogen that undergoes radioactive decay), they asked: Could the production of neutrinos be enhanced if radioactive atoms such as tritium could be made so cold that they could be brought into a quantum state known as a Bose-Einstein condensate?
A Bose-Einstein condensate, or BEC, is a state of matter that forms when a gas of certain particles is cooled down to near absolute zero. At this point, the particles are brought down to their lowest energy level and stop moving as individuals. In this deep freeze, the particles can start to “feel” each other’s quantum effects, and can act as one coherent entity — a unique phase that can result in exotic physics.
BECs have been realized in a number of atomic species. (One of the first instances was with sodium atoms, by MIT’s Wolfgang Ketterle, who shared the 2001 Nobel Prize in Physics for the result.) However, no one has made a BEC from radioactive atoms. To do so would be exceptionally challenging, as most radioisotopes have short half-lives and would decay entirely before they could be sufficiently cooled to form a BEC.
Nevertheless, Formaggio wondered, if radioactive atoms could be made into a BEC, would this enhance the production of neutrinos in some way? In trying to work out the quantum mechanical calculations, he found initially that no such effect was likely.
“It turned out to be a red herring — we can’t accelerate the process of radioactive decay, and neutrino production, just by making a Bose-Einstein condensate,” Formaggio says.
In sync with optics
Several years later, Jones revisited the idea, with an added ingredient: superradiance — a phenomenon of quantum optics that occurs when a collection of light-emitting atoms is stimulated to behave in sync. In this coherent phase, it’s predicted that the atoms should emit a burst of photons that is “superradiant,” or more radiant than when the atoms are normally out of sync.
Jones proposed to Formaggio that perhaps a similar superradiant effect is possible in a radioactive Bose-Einstein condensate, which could then result in a similar burst of neutrinos. The physicists went to the drawing board to work out the equations of quantum mechanics governing how light-emitting atoms morph from a coherent starting state into a superradiant state. They used the same equations to work out what radioactive atoms in a coherent BEC state would do.
“The outcome is: You get a lot more photons more quickly, and when you apply the same rules to something that gives you neutrinos, it will give you a whole bunch more neutrinos more quickly,” Formaggio explains. “That’s when the pieces clicked together, that superradiance in a radioactive condensate could enable this accelerated, laser-like neutrino emission.”
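The intuition for "a lot more photons more quickly" comes from textbook Dicke superradiance, where the peak emission rate of N coherent atoms scales roughly with N² rather than N. A hedged toy comparison — the quadratic scaling is the standard textbook result, not a claim about the exact rates in the team's paper:

```python
def independent_rate(n: int, gamma: float) -> float:
    # N atoms decaying independently emit at total rate N * gamma
    return n * gamma


def superradiant_peak_rate(n: int, gamma: float) -> float:
    # Textbook Dicke superradiance: peak burst rate scales ~ N**2 * gamma / 4
    return n * n * gamma / 4


gamma = 1.0e-7   # illustrative single-atom decay rate (arbitrary units)
n = 1_000_000    # atoms in the condensate, as in the article's example

# Ratio of peak coherent rate to ordinary rate grows as N/4
speedup = superradiant_peak_rate(n, gamma) / independent_rate(n, gamma)
```

For a million atoms the toy speedup factor is on the order of N/4, which conveys why a coherent condensate could turn an 82-day half-life into a minutes-long burst.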
To test their concept in theory, the team calculated how neutrinos would be produced from a cloud of 1 million super-cooled rubidium-83 atoms. They found that, in the coherent BEC state, the atoms radioactively decayed at an accelerating rate, releasing a laser-like beam of neutrinos within minutes.
Now that the physicists have shown in theory that a neutrino laser is possible, they plan to test the idea with a small tabletop setup.
“It should be enough to take this radioactive material, vaporize it, trap it with lasers, cool it down, and then turn it into a Bose-Einstein condensate,” Jones says. “Then it should start doing this superradiance spontaneously.”
The pair acknowledge that such an experiment will require a number of precautions and careful manipulation.
“If it turns out that we can show it in the lab, then people can think about: Can we use this as a neutrino detector? Or a new form of communication?” Formaggio says. “That’s when the fun really starts.”
In the search for habitable exoplanets, atmospheric conditions play a key role in determining if a planet can sustain liquid water. Suitable candidates often sit in the “Goldilocks zone,” a distance that is neither too close nor too far from their host star to allow liquid water. With the launch of the James Webb Space Telescope (JWST), astronomers are collecting improved observations of exoplanet atmospheres that will help determine which exoplanets are good candidates for further study.
In an open-access paper published today in The Astrophysical Journal Letters, astronomers used JWST to take a closer look at the atmosphere of the exoplanet TRAPPIST-1e, located in the TRAPPIST-1 system. While they haven’t found definitive proof of what it is made of — or if it even has an atmosphere — they were able to rule out several possibilities.
“The idea is: If we assume that the planet is not airless, can we constrain different atmospheric scenarios? Do those scenarios still allow for liquid water at the surface?” says Ana Glidden, a postdoc in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) and the MIT Kavli Institute for Astrophysics and Space Research, and the first author on the paper. The answers they found were yes.
The new data rule out a hydrogen-dominated atmosphere, and place tighter constraints on other atmospheric conditions commonly created through secondary generation, such as volcanic eruptions and outgassing from the planet’s interior. The data were consistent enough to still allow for the possibility of a surface ocean.
“TRAPPIST-1e remains one of our most compelling habitable-zone planets, and these new results take us a step closer to knowing what kind of world it is,” says Sara Seager, Class of 1941 Professor of Planetary Science at MIT and co-author on the study. “The evidence pointing away from Venus- and Mars-like atmospheres sharpens our focus on the scenarios still in play.”
The study’s co-authors also include collaborators from the University of Arizona, Johns Hopkins University, University of Michigan, the Space Telescope Science Institute, and members of the JWST-TST DREAMS Team.
Improved observations
Exoplanet atmospheres are studied using a technique called transmission spectroscopy. When a planet passes in front of its host star, the starlight is filtered through the planet’s atmosphere. Astronomers can determine which molecules are present in the atmosphere by seeing how the light changes at different wavelengths.
“Each molecule has a spectral fingerprint. You can compare your observations with those fingerprints to suss out which molecules may be present,” says Glidden.
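The fingerprint comparison Glidden describes can be caricatured as template matching: compare an observed transmission spectrum against per-molecule absorption templates and pick the closest. Everything below — the toy spectra, the least-squares metric, the two-molecule library — is an illustrative assumption, not the team's actual retrieval pipeline.

```python
def misfit(observed, template):
    """Sum of squared residuals between an observed spectrum and a template."""
    return sum((o - t) ** 2 for o, t in zip(observed, template))


def best_match(observed, fingerprints):
    """Return the molecule whose template best matches the observation."""
    return min(fingerprints, key=lambda name: misfit(observed, fingerprints[name]))


# Toy absorption depths at a few wavelengths (made-up numbers)
fingerprints = {
    "CO2": [0.9, 0.1, 0.4],
    "CH4": [0.2, 0.8, 0.3],
}
observed = [0.85, 0.15, 0.42]
```

Real retrievals fit many molecules, temperatures, and cloud properties simultaneously, but the core idea — matching wavelength-dependent light changes to known fingerprints — is the same.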
JWST has a larger wavelength coverage and higher spectral resolution than its predecessor, the Hubble Space Telescope, which makes it possible to observe molecules like carbon dioxide and methane that are more commonly found in our own solar system. However, the improved observations have also highlighted the problem of stellar contamination, where changes in the host star’s temperature due to things like sunspots and solar flares make it difficult to interpret data.
“Stellar activity strongly interferes with the planetary interpretation of the data because we can only observe a potential atmosphere through starlight,” says Glidden. “It is challenging to separate out which signals come from the star versus from the planet itself.”
Ruling out atmospheric conditions
The researchers used a novel approach to mitigate stellar activity. As a result, “any signal you can see varying visit-to-visit is most likely from the star, while anything that’s consistent between the visits is most likely the planet,” says Glidden.
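That mitigation logic can be sketched as a simple rule over repeated visits: average the per-visit spectra to estimate the stable planetary signal, and use visit-to-visit variance to flag wavelengths dominated by the star. A minimal illustration with made-up numbers, not the paper's actual method:

```python
from statistics import mean, pvariance


def split_signal(visits):
    """visits: list of per-visit spectra (equal-length lists of transit depths).

    Returns (mean spectrum ~ planet, per-wavelength variance ~ stellar activity).
    """
    columns = list(zip(*visits))               # group depths by wavelength
    planet = [mean(c) for c in columns]        # consistent part: planet
    stellar = [pvariance(c) for c in columns]  # varying part: star
    return planet, stellar


visits = [
    [0.30, 0.51, 0.40],   # visit 1
    [0.30, 0.49, 0.60],   # visit 2: third wavelength swings -> likely stellar
]
planet, stellar = split_signal(visits)
```

Wavelengths with large visit-to-visit variance would be down-weighted when interpreting the candidate atmosphere.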
The researchers were then able to compare the results to several different possible atmospheric scenarios. They found that carbon dioxide-rich atmospheres, like those of Mars and Venus, are unlikely, while a warm, nitrogen-rich atmosphere similar to Saturn’s moon Titan remains possible. The evidence, however, is too weak to determine whether any atmosphere is present, let alone to detect a specific gas. Ongoing observations already in the works will help narrow down the possibilities.
“With our initial observations, we have showcased the gains made with JWST. Our follow-up program will help us to further refine our understanding of one of our best habitable-zone planets,” says Glidden.
New research using the James Webb Telescope rules out possible atmospheric conditions of the exoplanet TRAPPIST-1e, depicted at the lower right as it transits in front of its host star. While it is still possible for the planet to have an atmosphere, it is unlikely to be a thick, hydrogen-rich one.
In the search for habitable exoplanets, atmospheric conditions play a key role in determining if a planet can sustain liquid water. Suitable candidates often sit in the “Goldilocks zone,” a distance that is neither too close nor too far from their host star to allow liquid water. With the launch of the James Webb Space Telescope (JWST), astronomers are collecting improved observations of exoplanet atmospheres that will help determine which exoplanets are good candidates for further study.
In an open-access paper published today in The Astrophysical Journal Letters, astronomers used JWST to take a closer look at the atmosphere of the exoplanet TRAPPIST-1e, located in the TRAPPIST-1 system. While they haven’t found definitive proof of what it is made of — or if it even has an atmosphere — they were able to rule out several possibilities.
“The idea is: If we assume that the planet is not airless, can we constrain different atmospheric scenarios? Do those scenarios still allow for liquid water at the surface?” says Ana Glidden, a postdoc in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) and the MIT Kavli Institute for Astrophysics and Space Research, and the first author on the paper. The answer to both questions turned out to be yes.
The new data rule out a hydrogen-dominated atmosphere, and place tighter constraints on atmospheres produced by secondary generation, such as volcanic eruptions and outgassing from the planet’s interior. The data still allow for the possibility of a surface ocean.
“TRAPPIST-1e remains one of our most compelling habitable-zone planets, and these new results take us a step closer to knowing what kind of world it is,” says Sara Seager, Class of 1941 Professor of Planetary Science at MIT and co-author on the study. “The evidence pointing away from Venus- and Mars-like atmospheres sharpens our focus on the scenarios still in play.”
The study’s co-authors also include collaborators from the University of Arizona, Johns Hopkins University, University of Michigan, the Space Telescope Science Institute, and members of the JWST-TST DREAMS Team.
Improved observations
Exoplanet atmospheres are studied using a technique called transmission spectroscopy. When a planet passes in front of its host star, the starlight is filtered through the planet’s atmosphere. Astronomers can determine which molecules are present in the atmosphere by seeing how the light changes at different wavelengths.
“Each molecule has a spectral fingerprint. You can compare your observations with those fingerprints to suss out which molecules may be present,” says Glidden.
JWST has a larger wavelength coverage and higher spectral resolution than its predecessor, the Hubble Space Telescope, which makes it possible to observe molecules like carbon dioxide and methane that are more commonly found in our own solar system. However, the improved observations have also highlighted the problem of stellar contamination, where changes in the host star’s temperature due to things like sunspots and solar flares make it difficult to interpret data.
“Stellar activity strongly interferes with the planetary interpretation of the data because we can only observe a potential atmosphere through starlight,” says Glidden. “It is challenging to separate out which signals come from the star versus from the planet itself.”
Ruling out atmospheric conditions
The researchers used a novel approach to mitigate for stellar activity and, as a result, “any signal you can see varying visit-to-visit is most likely from the star, while anything that’s consistent between the visits is most likely the planet,” says Glidden.
The researchers were then able to compare the results to several different possible atmospheric scenarios. They found that carbon dioxide-rich atmospheres, like those of Mars and Venus, are unlikely, while a warm, nitrogen-rich atmosphere similar to that of Saturn’s moon Titan remains possible. The evidence, however, is too weak to confirm whether any atmosphere is present, let alone to detect a specific gas. Additional observations already in the works will help narrow down the possibilities.
“With our initial observations, we have showcased the gains made with JWST. Our follow-up program will help us to further refine our understanding of one of our best habitable-zone planets,” says Glidden.
New research using the James Webb Space Telescope rules out possible atmospheric conditions of the exoplanet TRAPPIST-1e, depicted at the lower right as it transits in front of its host star. While it is still possible for the planet to have an atmosphere, it is unlikely to be a thick, hydrogen-rich one.
Game-based training improves not only the cognitive abilities of people with initial signs of developing dementia, but also leads to positive changes in the brain. That is according to two new studies by researchers from ETH Zurich and Eastern Switzerland University of Applied Sciences OST.
Applying to NUS has become easier — the University has adopted a new digital system that offers a smoother, faster, and more intuitive experience for students.
A few years ago, NUS set out to reimagine and enhance the entire admissions journey, aiming to deliver a seamless and user-friendly experience for both applicants and administrators. This initiative to modernise the NUS Undergraduate Admission System (UAS 2) was recently featured as a case study by Gartner, a leading global research and advisory firm. Chosen through a rigorous selection process and a thorough independent assessment, the case study highlights the significant impact of NUS’ digital transformation efforts.
Putting automation at the core of the initiative, NUS Office of Admissions (NUS OAM) and NUS Information Technology (NUS IT) teamed up to transform the application experience, simplifying and optimising the entire process.
The new and modernised system was first launched in Academic Year 2022/23 and fully completed in June 2024. The introduction of the new system significantly enhanced the application process and enabled NUS OAM to efficiently handle a substantial increase in applications. The system also provided timely notifications of application outcomes — via email and SMS — to prospective students, reducing the volume of admissions enquiries to NUS OAM.
NUS Dean of Admissions Professor Goh Say Song said, “The application process marks students’ first pivotal step in their journey towards admission. When the process is simple and smooth, it instantly builds trust with the students and shows that we value their experience from the very beginning.”
This new platform enables automation and data-driven processes, replacing manual systems with a digitally enhanced approach to managing applications. This has optimised the use of IT resources and enabled more responsive and scalable operations. NUS OAM and NUS IT also conducted extensive testing to ensure smooth integration of legacy systems with the new platform.
Transforming the application experience with automation
Automating the application process allowed NUS OAM to streamline its workflows, significantly reducing the time taken for administrators to process applications. Beyond enhancing operational productivity, the new platform offers applicants an intuitive, accessible and more personalised experience, setting NUS apart in an intensely competitive higher education landscape.
The modernisation of the admission system also had a tangible impact on the day-to-day work of administrators. With faster processing, more efficient use of IT resources, and higher applicant satisfaction, the upgrade reflects a broader cultural shift towards digital-first thinking and underscores the long-term value of transformative technology.
NUS Chief Information Technology Officer Ms Tan Shui-Min shared, “Transforming an institution-wide system was fostered by strong cross-department collaboration that enabled us to overcome challenges and drive the platform’s success. The recognition as a Gartner case study reaffirmed NUS’s position as a digital leader in higher education and validated the efforts of our teams!”
Both teams have some nuggets of wisdom to share from the experience of facilitating this system upgrade. “Remain flexible, maintain an agile mindset when adopting new technology, and regularly engage and proactively communicate with end-users to determine adjustments that need to be made. The success of such initiatives lies in having a clear vision, a commitment to innovation, and a willingness to reimagine traditional processes for a better future.”
Artificial intelligence optimization offers a host of benefits for mechanical engineers, including faster and more accurate designs and simulations, improved efficiency, reduced development costs through process automation, and enhanced predictive maintenance and quality control.
“When people think about mechanical engineering, they're thinking about basic mechanical tools like hammers and … hardware like cars, robots, cranes, but mechanical engineering is very broad,” says Faez Ahmed, the Doherty Chair in Ocean Utilization and associate professor of mechanical engineering at MIT. “Within mechanical engineering, machine learning, AI, and optimization are playing a big role.”
In Ahmed’s course, 2.155/156 (AI and Machine Learning for Engineering Design), students use tools and techniques from artificial intelligence and machine learning for mechanical engineering design, focusing on the creation of new products and addressing engineering design challenges.
“There’s a lot of reason for mechanical engineers to think about machine learning and AI to essentially expedite the design process,” says Lyle Regenwetter, a teaching assistant for the course and a PhD candidate in Ahmed’s Design Computation and Digital Engineering Lab (DeCoDE), where research focuses on developing new machine learning and optimization methods to study complex engineering design problems.
First offered in 2021, the class has quickly become one of the Department of Mechanical Engineering (MechE)’s most popular non-core offerings, attracting students from departments across the Institute, including mechanical and civil and environmental engineering, aeronautics and astronautics, the MIT Sloan School of Management, and nuclear and computer science, along with cross-registered students from Harvard University and other schools.
The course, which is open to both undergraduate and graduate students, focuses on the implementation of advanced machine learning and optimization strategies in the context of real-world mechanical design problems. From designing bike frames to city grids, students participate in contests related to AI for physical systems and tackle optimization challenges in a class environment fueled by friendly competition.
Students are given challenge problems and starter code that “gave a solution, but [not] the best solution …” explains Ilan Moyer, a graduate student in MechE. “Our task was to [determine], how can we do better?” Live leaderboards encourage students to continually refine their methods.
Em Lauber, a system design and management graduate student, says the process gave space to explore the application of what students were learning and to practise the skill of “literally how to code it.”
The curriculum incorporates discussions on research papers, and students also pursue hands-on exercises in machine learning tailored to specific engineering issues including robotics, aircraft, structures, and metamaterials. For their final project, students work together on a team project that employs AI techniques for design on a complex problem of their choice.
“It is wonderful to see the diverse breadth and high quality of class projects,” says Ahmed. “Student projects from this course often lead to research publications, and have even led to awards.” He cites the example of a recent paper, titled “GenCAD-Self-Repairing,” that went on to win the American Society of Mechanical Engineers Systems Engineering, Information and Knowledge Management 2025 Best Paper Award.
“The best part about the final project was that it gave every student the opportunity to apply what they’ve learned in the class to an area that interests them a lot,” says Malia Smith, a graduate student in MechE. Her project chose “markered motion captured data” and looked at predicting ground force for runners, an effort she called “really gratifying” because it worked so much better than expected.
Lauber took the framework of a “cat tree” design with different modules of poles, platforms, and ramps to create customized solutions for individual cat households, while Moyer created software for designing a new type of 3D printer architecture.
“When you see machine learning in popular culture, it’s very abstracted, and you have the sense that there’s something very complicated going on,” says Moyer. “This class has opened the curtains.”
MIT mechanical engineering graduate student Malia Smith (left) asks teaching assistant Noah Bagazinski for feedback on her final project for course 2.155/156 (AI and Machine Learning for Engineering Design).
Climate change presents substantial challenges for Singapore and Southeast Asia. Recognising the importance of addressing these needs, NUS is contributing to a new national-level collaboration aimed at enhancing weather prediction for Singapore and the region by leveraging the latest advances in science and technology.
On 5 September 2025, Professor Liu Bin, NUS Deputy President (Research and Technology), signed a Memorandum of Understanding with partners from the National Environment Agency (NEA), Agency for Science, Technology and Research (A*STAR), and Nanyang Technological University, Singapore (NTU Singapore) to establish the Climate and Weather Research Alliance Singapore (CAWRAS).
CAWRAS serves as a national research platform to advance tropical climate and weather research for Singapore and Southeast Asia. It will also nurture a local talent pipeline in weather and climate science. Senior Minister of State for Sustainability and the Environment Janil Puthucheary attended the launch as Guest-of-Honour.
Highlighting the significance of CAWRAS’ work, Dr Puthucheary said, “Climate science gives us a better glimpse into the future and helps reduce the uncertainty in climate projections, allowing us to plan and calibrate our various adaptation measures based on the latest available science, which we have done in areas such as coastal protection, flood resilience, heat resilience and food security. Weather services are crucial in providing the necessary data to government agencies and stakeholders for their operations.”
He added that there is a need to further understand tropical climate and weather systems, and to develop localised, high-resolution products tailored to the region’s unique conditions.
“The research alliance will bring together and harmonise the unique capabilities across our institutes,” said Dr Puthucheary. “This collaborative research model will also allow us to nurture a robust local talent pipeline in weather and climate science… We need to have that capability and expertise so that we position Singapore at the forefront of tropical urban weather and climate science.”
Five NUS projects are among the 10 projects awarded funding under the S$25 million Weather Science Research Programme (WSRP), funded under the Research, Innovation and Enterprise 2025 plan and implemented via CAWRAS.
NUS researchers will be spearheading the following research initiatives:
1. Improving thunderstorm prediction with high-resolution radar data
Dr Srivatsan V Raghavan from the NUS Tropical Marine Science Institute will be leading a project team to enhance Singapore’s weather science capabilities by leveraging high-resolution radar data to improve thunderstorm prediction within the critical 0-6 hour range and reduce false alarms for heavy rainfall events.
2. Modelling Singapore's complex urban environment and its effects on weather, including extreme conditions
A team of scientists led by Professor Matthias Roth from the Department of Geography at NUS Faculty of Arts and Social Sciences will look into developing a next-generation urban-scale weather forecasting system to improve prediction capabilities from the current 1.5km resolution used by the Meteorological Services Singapore to much finer neighbourhood scales (100-300m). The enhanced system will provide more detailed forecasts for urban heat, wind flows, extreme rainfall, and air pollution dispersion.
3. Understanding the effects of air-sea-land interactions on the weather of the Maritime Continent
Dr Kaushik Sasmal from the Technology Centre for Offshore and Marine, Singapore, will drive research efforts to develop a fully coupled atmosphere-ocean-land modeling system to improve weather prediction in the Maritime Continent region, particularly for phenomena strongly influenced by air-sea-land interactions such as squall lines, atmospheric and marine vortices, and the diurnal cycle of rainfall.
4. AI foundation models for regional weather prediction in the Maritime Continent
Assistant Professor Zhu Lailai from the Department of Mechanical Engineering under the College of Design and Engineering at NUS will be leading a project to establish a general framework for fine-tuning existing AI Foundation Models tailored to high-resolution regional weather prediction in the Maritime Continent. The aim is to strengthen extreme weather detection, with aviation identified as a priority application.
5. Leveraging advanced techniques to transform complex ensemble data into actionable tropical weather forecasts
A team led by Assistant Professor He Xiaogang from the Department of Civil and Environmental Engineering under the College of Design and Engineering at NUS will be developing the Tropical Ensemble Model Post-processing with Explainable Scenario (TEMPEST) system to interpret ensemble forecasts through novel clustering algorithms.
Prof Liu said, “NUS welcomes this national research alliance as an integral part of our commitment to research and innovation in the areas of sustainability and climate change. Leveraging our research strengths such as urban climate modelling, hydroclimatology, artificial intelligence, and foundation modelling, we are excited to contribute significantly on a national level to Singapore's weather prediction capabilities while nurturing the next generation of weather and climate scientists.”
Elevating weather science capabilities, together
Led by the Centre for Climate Research Singapore, CAWRAS brings together leading research institutions to expand weather science capabilities at the national level. This coordinated effort comes at a time when advances in technology, such as high-resolution modelling, artificial intelligence, and enhanced observational networks, present new opportunities to improve weather prediction. The research alliance will expand its scope to include climate research on longer timescales in future.
The 10 research projects awarded under the Weather Science Research Programme focus on four key areas: improving the use of weather observations, developing next-generation weather/climate models, performing a detailed historical weather re-analysis over recent decades for Southeast Asia, and enhancing weather prediction accuracy through advanced post-processing techniques.
Ms Koh Li-Na, Director-General of the Meteorological Service Singapore, NEA, said, "CAWRAS is a strong commitment by our research institutions, working with the Centre for Climate Research Singapore, to collectively tackle the unique challenges of predicting weather in our tropical urban environment and enhance our understanding of climate change. We look forward to translating science to improved services to bolster Singapore’s resilience in the face of climate change.”
Professor Lim Keng Hui, Assistant Chief Executive (Science & Engineering Research Council), A*STAR, said, "A*STAR is proud to contribute to this national effort to improve Singapore’s weather research. Our expertise in high performance computing, artificial intelligence (AI), modelling and simulation will contribute to the development of the Climate and Weather Research & Evaluation Testbed (CAWRET) and support regional analysis. We look forward to working closely with our partners to translate scientific innovations into practical solutions that strengthen Singapore’s resilience to weather-related challenges, particularly in sectors such as aviation, maritime, and urban planning.”
Professor Ernst Kuipers, Vice President (Research) of NTU Singapore said, “Leveraging NTU’s established track record in Earth and environmental sciences, supported by infrastructure like the Earth Observatory of Singapore, and our pioneering Climate Transformation Programme, we are uniquely positioned to combine AI, remote sensing, and advanced environmental modelling to forecast tropical weather with enhanced accuracy. Through interdisciplinary collaboration spanning fields like medicine, public health, environmental engineering, and urban resilience, NTU will contribute to Singapore’s role as a leading hub for tropical weather and climate science research in Southeast Asia.”
When we feel socially isolated, our brain motivates us to seek rewards. Current theory holds that this is a beneficial evolutionary adaptation to help us reconnect with others.
The University of Cambridge-led study found that people in their late teens are very sensitive to the experience of loneliness. After just a few hours without any social interaction, adolescents make significantly more effort to get rewards.
This increased motivation to seek rewards can help with social reconnection. But when connecting with others is not possible, the behaviour change might be problematic – for example, by making some people more prone to seek out rewards such as alcohol or recreational drugs.
The study found that the effect was stronger in adolescents who reported feeling lonelier while in isolation. When study participants were allowed to interact with others on social media during isolation, they reported feeling less lonely – and their reward-seeking behaviour changed less dramatically as a result.
“Our study demonstrates just how sensitive young people are to very short periods of isolation,” said Dr Livia Tomova, first author of the report, who conducted the study while in the Department of Psychology at the University of Cambridge.
“We found that loneliness significantly increases adolescents’ motivation to seek out rewards – whether that’s more social contact, money, or something else,” added Tomova, who is now based at the University of Cardiff.
Studies suggest that adolescent loneliness has doubled worldwide over the past decade. Social media has been suggested as the culprit, but the researchers say many other changes in society could also be to blame.
“Social media can lead to loneliness in some adolescents, but our study suggests that this relationship is complex,” said Professor Sarah-Jayne Blakemore in the University of Cambridge’s Department of Psychology, senior author of the report.
She added: “Virtual interaction with others seems to make isolated teens less driven to seek external rewards, compared to when they are isolated without access to social media. That suggests social media might reduce some of the negative effects of isolation – but of course we don’t know what potentially harmful effects it might have at the same time.”
While study participants got less bored and lonely in isolation if they had access to social media, they still experienced the same decrease in positive mood as those without access.
Social interaction is a basic human need, and lack of it leads to loneliness. Until now there has been very limited understanding of how loneliness affects adolescent behaviour, with most scientific experiments carried out in animal models.
How was the study done?
Researchers recruited young people from the local area in Cambridge, conducting extensive screening to gather a group of 40 adolescents aged 16-19 who had good social connections, no history of mental health problems, and average levels of loneliness for their age group.
Participants were given initial tests to establish their baseline score for each task. Then on two different days, they were asked to spend between three and four hours alone in a room before completing the same computer-based tasks again.
On one of the isolation days participants had no social interaction at all, but on the other they had access to virtual social interactions through their phone or laptop.
The study found that when virtual interactions were available, almost half the participants spent over half their time online – predominantly using Snapchat, Instagram and WhatsApp to message their friends.
Overall, the study found that participants became more motivated to look at images of positive social interactions, and to play games where they could win money, after being in isolation for around four hours. They were also better at learning how to get these rewards in ‘fruit machine’-type games.
If they could interact virtually with others while in isolation, they reported feeling less lonely. They were also less inclined to make an effort in the tasks than when they didn’t have virtual social interaction during their isolation.
This research was funded by a Henslow Research Fellowship from the Cambridge Philosophical Society, Wellcome, Jacobs Foundation, and Cambridge Biomedical Research Centre.
A study has found that adolescents become highly motivated to seek rewards after just a few hours of social isolation. This may be beneficial in driving them towards social interaction, but when opportunities for connection are limited could lead them to pursue less healthy rewards like alcohol or drugs.
The Asian Young Scientist Fellowship (AYSF) announced 12 Fellows this year: accomplished early-career scientists selected for their exceptional scientific contributions and potential in their respective fields.
The AYSF annually selects early-career researchers in the fundamental science disciplines of life sciences, physical sciences, mathematics and computer science. Each Fellow receives US$100,000 over two years to support their research, along with benefits from participating in the AYSF annual conference, academic activities, and access to a network of young scientists across Asia and globally.
Assoc Prof Koh said, “It’s an honour to be named an AYS Fellow and represent NUS on the international stage. I believe this Fellowship will boost our efforts to develop transformative skeletal editing reactions that have the capacity to change the landscape of organic synthesis.”
‘Surprise and relief’ from homeless patients: ‘This works for me’
Doctors report ‘fascinating and counterintuitive’ results delivering healthcare to hard-to-reach population via telehealth
Alvin Powell
Harvard Staff Writer
Katherine Koh.
Niles Singer/Harvard Staff Photographer
Cellphones are everywhere, including in the hands of homeless people, a population among America’s sickest — average life expectancy is just 51 — and among the hardest to reach by healthcare workers.
It’s also a population not often associated with technology, which comes with utility bills, internet service, and cellphone provider plans. But a pandemic-era innovation — telehealth for homeless people — still offers a way for today’s providers to reach homeless patients more frequently and reliably than traditional office visits.
In this edited conversation with the Gazette, Katherine Koh, a psychiatrist at Massachusetts General Hospital (MGH) and Boston Health Care for the Homeless Program (BHCHP), said she’s seen the effect in her own practice, and she and co-authors from MGH, BHCHP, Boston Medical Center, and Brown University want to continue research on this approach and highlight its success for others assisting this hard-to-reach population.
How did your work in this area get started?
Like most healthcare organizations, BHCHP pivoted to telehealth during the pandemic in 2020. Initially, a lot of people — including myself — assumed it wouldn’t work well. But the patient-missed-appointment rate for my telehealth clinic days was lower than on in-person days, which I found fascinating and counterintuitive.
Telehealth for unhoused patients may sound like an oxymoron, but upon reflection it actually makes sense. A lot of people are struggling with substance use, mental health symptoms, and executive functioning while trying to meet basic needs of food, water, and clothing, making it hard for people to come to a clinic appointment in a timely manner. They often need a T pass or a way to pay for the T. Telehealth creates an easier way for people to access care without having to contend with these barriers.
While there is a high missed-appointment rate when patients are expected to come in person, I found that with telehealth, about 80 percent of people would pick up their phones. For example, I talked to one patient every single week for about six months straight, whereas in person, he would come once a month at best. This article was born out of those clinical experiences.
We did a preliminary analysis and found that during the first six months of the pandemic, 76 percent of behavioral health visits were done via telehealth, and 26 percent of medical visits. Arguably more fascinating is that telehealth is still being used now, when it doesn’t have to be. It’s still a common practice in BHCHP, and I think it should be optimized as a creative way to reach this population that’s often hard to reach.
Is it used more for particular conditions?
Behavioral health visits particularly lend themselves to telehealth, as a physical exam is not needed in many cases. We’ve also heard from our physician colleagues that it’s working especially well for diabetes follow-up visits, COVID-19 and other infectious diseases, and suboxone visits for opioid use disorder. It doesn’t work for every condition or patient, but we found that 50 percent of behavioral health patients have had one telehealth visit in the past year.
During telehealth visits, do you feel that you’re connecting with patients as much as you would in person?
Generally, yes, but I find there are nuances in how telehealth works for different patients. Most people can easily do an audio visit because it’s literally just picking up the phone. Video visits can be more challenging for patients because they require a stable internet connection and more tech literacy. However, another way we have made telehealth work is having a staff member set up an iPad to visit with a remote provider while patients are in our respite center or clinic, which has facilitated the use of video technology.
If it’s my first time meeting a new patient, audio visits are harder for establishing a connection than in person or via video. But if I know the patient, oftentimes an audio visit works very well to meet their needs. I often find patients expressing surprise and relief about the televisits, saying things like “This works for me.” I think it empowers them and encourages them to continue engaging in medical care.
What are the biggest healthcare issues facing homeless people today?
There are many. Few populations bear a greater psychiatric burden, which is what I’m focused on as a psychiatrist. When people think about homelessness and mental illness, they often think about those suffering schizophrenia, who are disconnected from reality. That is certainly prevalent, but much of what I treat is the sequelae of trauma. Many of these individuals have been through unimaginable trauma from an early age, which affects their ability to trust, to regulate emotions, to tolerate stressful situations. Substance use disorders are very prevalent: opioid use disorder and alcohol use disorder are the most common. Then there’s a whole range of acute and chronic medical diseases like cardiovascular disease, respiratory illness, and cancers. These conditions contribute to an abominable mortality rate and mean age of death that’s almost 30 years lower than the general population.
Are measures of health outcomes associated with this research?
Telehealth appears to increase engagement and access — typically two major challenges — for the unhoused population, but we don’t know yet whether this translates into fewer emergency department visits and inpatient stays. That is a really important area of research.
This research also encourages the use of technology to address homelessness more broadly. For example, I’m working on a study using AI to measure outcomes for our patients, which I think is an exciting new area. Historically, homeless individuals have been excluded from advances in technology, but I think the telehealth example shows that technology can and should be used to creatively advance research and care for unhoused people.
Homelessness is at a record high in America. What factors are contributing to this?
Increasing housing costs, along with stagnant wages. An increase in migrants without adequate systems to address their influx, and the end of pandemic-era renter protections. Natural disasters are also an underappreciated reason for increasing homelessness. For example, in the 2024 count, people displaced as a result of the Maui wildfires were cited as contributing to the rise in homelessness.
A couple of years ago, the Mass and Cass homeless encampment — at Massachusetts Avenue and Melnea Cass Boulevard in Boston — was broken up, with homeless people sent to shelters, temporary housing, and some moving to other parts of the city. There has been media coverage that people have come back to that area. What’s your assessment of the situation there?
Tackling this crisis also has to be about prevention, because even if we were to house everybody who’s homeless today, we won’t solve the problem. There’s a whole pipeline of people falling into homelessness and not enough affordable housing and places to send them. My understanding is that the current Mass and Cass population is not just people returning, but also new people coming in. The centralization of services in that area — there are treatment programs, BHCHP, and shelters there — makes it a natural place to congregate.
Addressing homelessness, which has many roots, in a large city, within budget constraints, is very complicated. Every major city is challenged by encampments and a lack of funding or support for sustainable solutions. At the patient level, I do think it’s critical to recognize that many unhoused people are there because of decades of trauma, adversity, and hardship. Their wounds can’t be healed within weeks or even months. Therefore, finding ways to fund temporary housing sites longer-term until people can move into permanent supportive housing, as well as increasing the supply of affordable housing and focusing on prevention for people at high risk, is key to solving this crisis.
The first comprehensive map of mouse brain activity has been unveiled by a large international collaboration of neuroscientists.
Researchers from the International Brain Laboratory (IBL), including MIT neuroscientist Ila Fiete, published their open-access findings today in two papers in Nature, revealing insights into how decision-making unfolds across the entire brain in mice at single-cell resolution. This brain-wide activity map challenges the traditional hierarchical view of information processing in the brain and shows that decision-making is distributed across many regions in a highly coordinated way.
“This is the first time anyone has produced a full, brain-wide map of the activity of single neurons during decision-making,” explains co-founder of IBL Alexandre Pouget. “The scale is unprecedented as we recorded from over half-a-million neurons across mice in 12 labs, covering 279 brain areas, which together represent 95 percent of the mouse brain volume. The decision-making activity, and particularly reward, lit up the brain like a Christmas tree,” adds Pouget, who is also a group leader at the University of Geneva in Switzerland.
Modeling decision-making
The brain map was made possible by a major international collaboration of neuroscientists from multiple universities, including MIT. Researchers across 12 labs used state-of-the-art silicon electrodes, called Neuropixels probes, for simultaneous neural recordings to measure brain activity while mice were carrying out a decision-making task.
“Participating in the International Brain Laboratory has added new ways for our group to contribute to science,” says Fiete, who is also a professor of brain and cognitive sciences, an associate investigator at the McGovern Institute for Brain Research, and director of the K. Lisa Yang ICoN Center at MIT. “Our lab has helped standardize methods to analyze and generate robust conclusions from data. As computational neuroscientists interested in building models of how the brain works, access to brain-wide recordings is incredible: the traditional approach of recording from one or a few brain areas limited our ability to build and test theories, resulting in fragmented models. Now, we have the delightful but formidable task to make sense of how all parts of the brain coordinate to perform a behavior. Surprisingly, having a full view of the brain leads to simplifications in the models of decision-making,” says Fiete.
The labs collected data from mice performing a decision-making task with sensory, motor, and cognitive components. In the task, a mouse sits in front of a screen and a light appears on the left or right side. If the mouse then responds by moving a small wheel in the correct direction, it receives a reward.
In some trials, the light is so faint that the animal must guess which way to turn the wheel, for which it can use prior knowledge: the light tends to appear more frequently on one side for a number of trials, before the high-frequency side switches. Well-trained mice learn to use this information to help them make correct guesses. These challenging trials therefore allowed the researchers to study how prior expectations influence perception and decision-making.
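The block structure described above can be sketched in code. The following is a minimal, illustrative trial generator, not the published IBL task specification: the block-length range, bias probability, and contrast levels here are placeholder values chosen only to show the idea of a biased side that periodically switches.

```python
import random

def generate_trials(n_trials, block_min=20, block_max=100, p_bias=0.8,
                    contrasts=(0.0, 0.0625, 0.125, 0.25, 1.0)):
    """Generate (side, contrast) trials in biased blocks.

    side is -1 (left) or +1 (right). Within a block, one side appears
    with probability p_bias; after a random block length, the biased
    side switches. All parameters are illustrative assumptions.
    """
    trials = []
    bias_right = random.random() < 0.5           # pick the first biased side
    remaining = random.randint(block_min, block_max)
    for _ in range(n_trials):
        if remaining == 0:                       # block over: flip the bias
            bias_right = not bias_right
            remaining = random.randint(block_min, block_max)
        p_right = p_bias if bias_right else 1 - p_bias
        side = 1 if random.random() < p_right else -1
        contrast = random.choice(contrasts)      # 0.0 forces a pure guess
        trials.append((side, contrast))
        remaining -= 1
    return trials
```

On zero-contrast trials the stimulus carries no information, so a mouse (or a model) can only beat chance by tracking which side has recently been more frequent — exactly the prior-expectation signal the second paper traces across the brain.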
Brain-wide results
The first paper, “A brain-wide map of neural activity during complex behaviour,” showed that decision-making signals are surprisingly distributed across the brain, not localized to specific regions. This adds brain-wide evidence to a growing number of studies that challenge the traditional hierarchical model of brain function, and emphasizes that there is constant communication across brain areas during decision-making, movement onset, and even reward. This means that neuroscientists will need to take a more holistic, brain-wide approach when studying complex behaviors in the future.
“The unprecedented breadth of our recordings pulls back the curtain on how the entire brain performs the whole arc of sensory processing, cognitive decision-making, and movement generation,” says Fiete. “Structuring a collaboration that collects a large standardized dataset which single labs could not assemble is a revolutionary new direction for systems neuroscience, initiating the field into the hyper-collaborative mode that has contributed to leaps forward in particle physics and human genetics. Beyond our own conclusions, the dataset and associated technologies, which were released much earlier as part of the IBL mission, have already become a massively used resource for the entire neuroscience community.”
The second paper, “Brain-wide representations of prior information,” showed that prior expectations — our beliefs about what is likely to happen based on our recent experience — are encoded throughout the brain. Surprisingly, these expectations are not only found in cognitive areas, but also brain areas that process sensory information and control actions. For example, expectations are even encoded in early sensory areas such as the thalamus, the brain’s first relay for visual input from the eye. This supports the view that the brain acts as a prediction machine, but with expectations encoded across multiple brain structures playing a central role in guiding behavior responses. These findings could have implications for understanding conditions such as schizophrenia and autism, which are thought to be caused by differences in the way expectations are updated in the brain.
“Much remains to be unpacked: If it is possible to find a signal in a brain area, does it mean that this area is generating the signal, or simply reflecting a signal generated somewhere else? How strongly is our perception of the world shaped by our expectations? Now we can generate some quantitative answers and begin the next phase experiments to learn about the origins of the expectation signals by intervening to modulate their activity,” says Fiete.
Looking ahead, the IBL team plans to move beyond its initial focus on decision-making to explore a broader range of neuroscience questions. With renewed funding in hand, IBL aims to expand its research scope and continue to support large-scale, standardized experiments.
New model of collaborative neuroscience
Officially launched in 2017, IBL introduced a new model of collaboration in neuroscience that uses a standardized set of tools and data processing pipelines shared across multiple labs, enabling the collection of massive datasets while ensuring data alignment and reproducibility. This approach to democratize and accelerate science draws inspiration from large-scale collaborations in physics and biology, such as CERN and the Human Genome Project.
All data from these studies, along with detailed specifications of the tools and protocols used for data collection, are openly accessible to the global scientific community for further analysis and research. Summaries of these resources can be viewed and downloaded on the IBL website under the sections: Data, Tools, Protocols.
This research was supported by grants from Wellcome, the Simons Foundation, the National Institutes of Health, the National Science Foundation, the Gatsby Charitable Foundation, and by the Max Planck Society and the Humboldt Foundation.
This brain-wide map shows 75,000 analyzed neurons. Each dot is linearly scaled according to the raw average firing rate of that neuron, up to a maximum size.
3D printing has come a long way since its invention in 1983 by Chuck Hull, who pioneered stereolithography, a technique that solidifies liquid resin into solid objects using ultraviolet lasers. Over the decades, 3D printers have evolved from experimental curiosities into tools capable of producing everything from custom prosthetics to complex food designs, architectural models, and even functioning human organs.
But as the technology matures, its environmental footprint has become increasingly difficult to set aside. The vast majority of consumer and industrial 3D printing still relies on petroleum-based plastic filament. And while “greener” alternatives made from biodegradable or recycled materials exist, they come with a serious trade-off: they’re often not as strong. These eco-friendly filaments tend to become brittle under stress, making them ill-suited for structural applications or load-bearing parts — exactly where strength matters most.
This trade-off between sustainability and mechanical performance prompted researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Hasso Plattner Institute to ask: Is it possible to build objects that are mostly eco-friendly, but still strong where it counts?
Their answer is SustainaPrint, a new software and hardware toolkit designed to help users strategically combine strong and weak filaments to get the best of both worlds. Instead of printing an entire object with high-performance plastic, the system analyzes a model through finite element analysis simulations, predicts where the object is most likely to experience stress, and then reinforces just those zones with stronger material. The rest of the part can be printed using greener, weaker filament, reducing plastic use while preserving structural integrity.
“Our hope is that SustainaPrint can be used in industrial and distributed manufacturing settings one day, where local material stocks may vary in quality and composition,” says MIT PhD student and CSAIL researcher Maxine Perroni-Scharf, who is a lead author on a paper presenting the project. “In these contexts, the testing toolkit could help ensure the reliability of available filaments, while the software’s reinforcement strategy could reduce overall material consumption without sacrificing function.”
For their experiments, the team used Polymaker’s PolyTerra PLA as the eco-friendly filament, and standard or Tough PLA from Ultimaker for reinforcement. They used a 20 percent reinforcement threshold to show that even a small amount of strong plastic goes a long way. Using this ratio, SustainaPrint was able to recover up to 70 percent of the strength of an object printed entirely with high-performance plastic.
They printed dozens of objects, from simple mechanical shapes like rings and beams to more functional household items such as headphone stands, wall hooks, and plant pots. Each object was printed three ways: once using only eco-friendly filament, once using only strong PLA, and once with the hybrid SustainaPrint configuration. The printed parts were then mechanically tested by pulling, bending, or otherwise breaking them to measure how much force each configuration could withstand.
In many cases, the hybrid prints held up nearly as well as the full-strength versions. For example, in one test involving a dome-like shape, the hybrid version outperformed the version printed entirely in Tough PLA. The team believes this may be due to the reinforced version’s ability to distribute stress more evenly, avoiding the brittle failure sometimes caused by excessive stiffness.
“This indicates that in certain geometries and loading conditions, mixing materials strategically may actually outperform a single homogenous material,” says Perroni-Scharf. “It’s a reminder that real-world mechanical behavior is full of complexity, especially in 3D printing, where interlayer adhesion and tool path decisions can affect performance in unexpected ways.”
A lean, green, eco-friendly printing machine
SustainaPrint starts off by letting a user upload their 3D model into a custom interface. The user selects fixed regions and areas where forces will be applied, and the software then uses finite element analysis to simulate how the object will deform under stress. It then creates a map showing stress distribution inside the structure, highlighting areas under compression or tension, and applies heuristics to segment the object into two categories: regions that need reinforcement, and regions that don’t.
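The segmentation step can be illustrated with a toy heuristic. This is a hedged sketch, not the authors’ actual algorithm: it assumes a finished FEA solve has already produced a per-element stress value and an element volume, and simply marks the highest-stress elements for strong filament until a volume budget (such as the paper’s 20 percent figure) is spent.

```python
import numpy as np

def segment_by_stress(stress, volumes, strong_fraction=0.2):
    """Mark elements for strong-filament reinforcement.

    stress:  per-element stress magnitudes from an FEA solve (assumed given)
    volumes: per-element volumes
    Returns a boolean mask; True means "print with strong material".
    Greedy and illustrative only -- the real system's heuristics differ.
    """
    order = np.argsort(stress)[::-1]            # highest-stress elements first
    budget = strong_fraction * volumes.sum()    # volume allowed for strong plastic
    strong = np.zeros(len(stress), dtype=bool)
    used = 0.0
    for i in order:
        if used + volumes[i] > budget:          # stop once the budget is spent
            break
        strong[i] = True
        used += volumes[i]
    return strong
```

The design choice mirrors the article’s premise: because stress concentrates in a small fraction of most parts’ geometry, a modest strong-material budget placed greedily at the hot spots can recover most of the full-strength part’s performance.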
Recognizing the need for accessible and low-cost testing, the team also developed a DIY testing toolkit to help users assess strength before printing. The kit has a 3D-printable device with modules for measuring both tensile and flexural strength. Users can pair the device with common items like pull-up bars or digital scales to get rough, but reliable performance metrics. The team benchmarked their results against manufacturer data and found that their measurements consistently fell within one standard deviation, even for filaments that had undergone multiple recycling cycles.
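For readers curious what “rough but reliable” numbers from such a kit look like, the conversions below use the standard textbook formulas for rectangular specimens. The formulas themselves are standard; the assumption is only that the kit measures a peak force (e.g., read off a digital scale) on a specimen of known dimensions.

```python
def tensile_strength(max_force_n, width_mm, thickness_mm):
    """Ultimate tensile strength in MPa: peak force over the
    rectangular cross-sectional area (N / mm^2 == MPa)."""
    return max_force_n / (width_mm * thickness_mm)

def flexural_strength(max_force_n, span_mm, width_mm, thickness_mm):
    """Three-point-bend flexural strength in MPa, using the standard
    beam formula sigma = 3 F L / (2 b d^2)."""
    return 3 * max_force_n * span_mm / (2 * width_mm * thickness_mm ** 2)
```

For example, a 100 N peak load on a 10 mm x 4 mm bar gives a tensile strength of 2.5 MPa, and the same load in a 64 mm-span bend test gives a flexural strength of 60 MPa, which is the kind of figure one would compare against a filament datasheet.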
Although the current system is designed for dual-extrusion printers, the researchers believe that with some manual filament swapping and calibration, it could be adapted for single-extruder setups, too. In its current form, the system simplifies the modeling process by allowing just one force and one fixed boundary per simulation. While this covers a wide range of common use cases, the team sees future work expanding the software to support more complex and dynamic loading conditions. The team also sees potential in using AI to infer the object’s intended use based on its geometry, which could allow for fully automated stress modeling without manual input of forces or boundaries.
3D for free
The researchers plan to release SustainaPrint open-source, making both the software and testing toolkit available for public use and modification. Another initiative they aspire to bring to life in the future: education. “In a classroom, SustainaPrint isn’t just a tool, it’s a way to teach students about material science, structural engineering, and sustainable design, all in one project,” says Perroni-Scharf. “It turns these abstract concepts into something tangible.”
As 3D printing becomes more embedded in how we manufacture and prototype everything from consumer goods to emergency equipment, sustainability concerns will only grow. With tools like SustainaPrint, those concerns no longer need to come at the expense of performance. Instead, they can become part of the design process: built into the very geometry of the things we make.
Co-author Patrick Baudisch, who is a professor at the Hasso Plattner Institute, adds that “the project addresses a key question: What is the point of collecting material for the purpose of recycling, when there is no plan to actually ever use that material? Maxine presents the missing link between the theoretical/abstract idea of 3D printing material recycling and what it actually takes to make this idea relevant.”
Perroni-Scharf and Baudisch wrote the paper with CSAIL research assistant Jennifer Xiao; MIT Department of Electrical Engineering and Computer Science master’s student Cole Paulin ’24; master’s student Ray Wang SM ’25 and PhD student Ticha Sethapakdi SM ’19 (both CSAIL members); Hasso Plattner Institute PhD student Muhammad Abdullah; and Associate Professor Stefanie Mueller, lead of the Human-Computer Interaction Engineering Group at CSAIL.
The researchers’ work was supported by a Designing for Sustainability Grant from the Designing for Sustainability MIT-HPI Research Program. Their work will be presented at the ACM Symposium on User Interface Software and Technology in September.
A new software and hardware toolkit called SustainaPrint can help users strategically combine strong and weak filaments to achieve the best of both worlds. Instead of printing an entire object with high-performance plastic, the system analyzes a model, predicts where the object is most likely to experience stress, and reinforces those zones with stronger material.
The Pre-read Assembly focused on Gordin's 2021 book “On the Fringe,” which incoming students read over the summer as their introduction to the intellectual life of the University.
Court victory for Harvard in research funding fight
Government acted unlawfully when it cut grants, ruling says
Alvin Powell
Harvard Staff Writer
A U.S. District Court in Boston on Wednesday struck down the federal government’s cancellation of $2.2 billion in research funding to Harvard, rejecting as unconstitutional the government’s attempt to force campus changes that the University argues would violate its First Amendment rights and academic freedom.
The suit seeking restoration of the funding was filed by Harvard in April in response to a freeze order issued by the government.
The summary judgment, issued by U.S. District Judge Allison Burroughs, pushed back against the Trump administration’s claim that it is acting in response to antisemitism at Harvard.
While combating antisemitism is “indisputably an important and worthy objective,” the court wrote, the allegations amount to little more than “a smoke screen” for broad demands that seek to bring the University into line with the administration’s preferred ideological viewpoint. This effort violates Harvard’s First Amendment rights and ignores procedural requirements laid out in federal law, the court ruled.
“The idea that fighting antisemitism is Defendants’ true aim is belied by the fact that the majority of the demands they are making of Harvard to restore its research funding are directed, on their face, at Harvard’s governance, staffing and hiring practices, and admissions policies—all of which have little to do with antisemitism and everything to do with Defendants’ power and political views,” the court said, adding: “The First Amendment is important and the right to free speech must be zealously guarded. Free speech has always been a hallmark of our democracy.”
Harvard President Alan Garber responded to the summary judgment in a note to the community on Wednesday.
“The ruling affirms Harvard’s First Amendment and procedural rights, and validates our arguments in defense of the University’s academic freedom, critical scientific research, and the core principles of American higher education,” he said. “Even as we acknowledge the important principles affirmed in today’s ruling, we will continue to assess the implications of the opinion, monitor further legal developments, and be mindful of the changing landscape in which we seek to fulfill our mission.”
The court’s decision, which the Trump administration has said it will appeal, is the latest development in a fight that has escalated since April 11, when the government tied Harvard’s federal funding to a series of demands that included changes to governance and hiring practices and viewpoint audits of faculty, students, and staff.
“No government — regardless of which party is in power — should dictate what private universities can teach, whom they can admit and hire, and which areas of study and inquiry they can pursue,” Garber wrote in response to the demands. Hours later, the administration announced it was freezing $2.2 billion in grants and $60 million in contracts to the University, funds that were later terminated.
The abrupt halt to funding, which included multiyear grants and contracts, has broadly and severely disrupted University research. It has endangered investigations into everything from cancer to Alzheimer’s to climate change and triggered a scramble for alternate funding sources.
In May, Garber and Provost John Manning announced a $250 million fund intended to stabilize the University’s research enterprise while the dispute with the Trump administration played out. In a July update, Garber, Manning, and University financial officers said the federal government’s actions could cost Harvard as much as $1 billion annually.
The government has maintained that antisemitic incidents on campus violated civil rights laws and that Harvard’s attempts to address the situation have been inadequate. The University has countered by pointing to several actions aimed at fighting campus bias, including the release of task force reports focused on antisemitism and anti-Muslim bias. Garber described the reports as “hard hitting and painful” and pledged that they would be the basis for additional reforms.
The court noted these actions in its ruling.
“We must fight against antisemitism, but we equally need to protect our rights, including our right to free speech, and neither goal should nor needs to be sacrificed on the altar of the other. Harvard is currently, even if belatedly, taking steps it needs to take to combat antisemitism and seems willing to do even more if need be. Now it is the job of the courts to similarly step up, to act to safeguard academic freedom and freedom of speech as required by the Constitution, and to ensure that important research is not improperly subjected to arbitrary and procedurally infirm grant terminations, even if doing so risks the wrath of a government committed to its agenda no matter the cost.”
Does this cellphone habit raise risk of hemorrhoids?
Jacqueline Mitchell
BIDMC Communications
Gastroenterologist Trisha Pasricha discusses why new findings may change how you think about bathroom routines
Hemorrhoids are among the most frequent gastrointestinal complaints in the United States, sending millions of people to clinics and emergency rooms each year and costing the health system hundreds of millions of dollars. Despite their prevalence, the causes remain poorly defined. Constipation, straining, pregnancy, and low-fiber diets have all been implicated, but physician-investigator Trisha Pasricha and colleagues wondered whether the modern habit of lingering in the bathroom with a phone might also play a role.
In a study of 125 adults undergoing routine colonoscopy at Beth Israel Deaconess Medical Center, the team surveyed participants about toilet habits, smartphone use, diet, and activity levels, then compared responses with direct colonoscopy findings. The results revealed some surprising patterns:
Two-thirds of participants admitted to using their phones on the toilet.
Smartphone use on the toilet was associated with a 46 percent increased risk of having hemorrhoids.
Phone users were five times more likely to sit for more than five minutes per trip.
Younger adults were especially prone to the habit.
Smartphone users reported less weekly exercise than non-users.
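A figure like “46 percent increased risk” typically comes from comparing the rate of hemorrhoids among phone users with the rate among non-users (the published estimate is model-adjusted; the counts below are made up for illustration, not the study’s data):

```python
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk ratio: incidence among the exposed divided by incidence
    among the unexposed."""
    return (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

# Made-up counts: 60 of 100 phone users vs. 41 of 100 non-users with hemorrhoids
rr = relative_risk(60, 100, 41, 100)
print(f"{(rr - 1) * 100:.0f}% increased risk")  # 46% increased risk
```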
We asked Pasricha what these findings, published in PLOS One, mean for patients and how they might change the way we think about everyday bathroom routines.
What inspired your study on hemorrhoids and smartphone use?
I’m a gastroenterologist and I’m also writing a book that’s coming out in the spring called “You’ve Been Pooping All Wrong.” And one of the chapters I’m writing is about hemorrhoids.
My colleagues and I in GI, we all tell our patients not to spend longer than a couple minutes on the toilet. We all have this sense that spending too long on the toilet is bad for you. But when I was writing this book chapter, I went back to the literature to see what this five-minute rule is really based on. And the data out there is pretty sparse.
I was struck by this fantastic old study from 1989 in The Lancet on hemorrhoids and reading the newspaper on the toilet. It was a very simple study of about 100 patients. They looked at how many patients read the newspaper on the toilet and then doctors had a look to see how many of them had hemorrhoids. That made it into The Lancet!
But that study did find that there was this association. More hemorrhoids were found amongst people who spent time reading on the toilet. Now in 2025, I don’t think anyone’s reading the newspaper, but we know everybody’s on their phones in the bathroom. So I thought we needed to update this literature for the modern TikTok era.
What did your team investigate and what did you find?
In our study, about two-thirds of people reported using smartphones on the toilet. When we asked those users whether they ever sat on the toilet longer than they intended because of their phone, only about half said yes.
But here’s the interesting part: Smartphone users were five times more likely to spend more than five minutes on the toilet compared to non-users. So clearly, people are spending more time — but only half of them recognize that their phone is the reason.
We also looked at whether constipation or straining might explain the extra time, but there were no differences between the groups. That suggests the phone itself is driving the behavior. I think what’s happening is that time sort of slows down when you’re scrolling, and people don’t realize just how much longer they’re sitting there. Half admit it, but the other half are still doing it without making the connection.
Let’s go back to basics — what even are hemorrhoids? How serious are they?
Well, they’re more of a nuisance than something that’s really going to, you know, kill you, but they can really impact somebody’s quality of life. In terms of healthcare expenditure, they are the third most common reason people see their doctors.
We all have hemorrhoid cushions — vascular cushions made of blood vessels, connective tissue, and smooth muscle — right at the end of our GI tract. That’s a normal part of our body. It’s only when they get engorged that we notice them, and that’s when they become symptomatic and are called hemorrhoids.
Hemorrhoid cushions help provide a barrier between all of the stool and gas that’s in your body and the outside world. They can kind of tell the difference between solid, gas, and liquid, and help detect when it’s safe to pass gas but not have a whole bowel movement in public. They are a cushiony barrier that saves you from … social embarrassment.
When they become engorged internally, they often bleed and become uncomfortable. The external ones are the ones that can cause itching. It can feel like there’s a bump, like there’s something there, and they’re difficult to clean. And then that causes this vicious cycle of more discomfort, more irritation. The more angry they get, the more likely they are to bleed. It can really just spiral.
How do social dynamics — particularly sex differences — influence bathroom habits and related health issues?
I love that you asked that. Let me just say it: We did suspect there was going to be a sex divide. We stratified our data by sex and you can see there’s a trend that men are spending more time on the toilet. That surprised absolutely nobody in our group. But we were underpowered to really prove that statistically, so I think that is the question we should ask for our next study.
There’s also something to be said about how in general with GI concerns, women are more likely to get help. Actually, they’re more likely to seek treatment in general, and sometimes men are a little bit less likely to get help and talk to their providers. We often see men who come in having been dragged in by their wives and they’re like, “I actually don’t have a problem.” And their wives say, “You spend 40 minutes in the bathroom every morning. There’s a problem.”
Sometimes hearing it from another person can make you realize what you are experiencing is not normal.
Why is it important to normalize conversations about gut health and other “embarrassing” topics?
I went into gastroenterology because I saw as a med student and early on in my life that you can have almost everything you want in life, but if you can’t eat the food you love and poop it out comfortably, you don’t have a quality of life. And if we can’t bring ourselves to even talk about it, we can’t get help and physicians can’t help our patients.
It’s a big passion of mine to try to normalize these conversations. The gut is just like any other part of your body and I think we need to treat it with respect and love.
These images are a composite of separate exposures of the “little red dots” acquired by the James Webb Space Telescope using its primary near-infrared imager, which provides high-resolution imaging and spectroscopy for observing the early universe.
Image: NASA, ESA, CSA, STScI, Dale Kocevski (Colby College)
Kermit Pattison
Harvard Staff Writer
Astrophysicists think mysterious ‘little red dots’ are generated from spinning dark matter; studying them may yield insights into the evolution of the universe
For more than two years, astronomers have been puzzled by a mysterious discovery from the ancient universe — hundreds of objects known as “little red dots” so far away the light had to travel billions of years to become visible to scientists.
First detected by the James Webb Space Telescope, these unusually compact vestiges from the cosmic dawn have sparked intense debate: Are they densely packed galaxies? Or do they contain massive black holes?
Now two Harvard astrophysicists have proposed a new theory: These distant objects are new galaxies being formed inside slowly spinning halos of dark matter — and studying them may yield important new insights into the formation of the universe.
“Telescopes are time machines,” said Fabio Pacucci, a Clay Fellow in the Harvard-Smithsonian Center for Astrophysics and first author of the new paper. “If you look at the moon, you see it as it was one second ago, and if you look at the sun, as it was eight minutes ago. If you look at these little red dots, it was billions of years ago.”
Dark matter halos are believed to play prominent roles in the birth of galaxies and the evolution of the universe. (Dark matter is a mysterious substance that remains invisible because it does not absorb, reflect or emit light, but it is believed to comprise the vast majority of matter in the universe.)
The halos have not been directly observed, and their existence is based on inferences from other observations, including the motions of stars and gas, and the bending of light.
“A dark matter halo is a cradle to form a galaxy,” said Pacucci. “The bigger the dark matter halo, the bigger the galaxy at the center.”
The enigmatic dots are among the most startling discoveries from the most powerful telescope ever launched into space. Launched in 2021, the James Webb Space Telescope (JWST) was designed to study the “cosmic dawn” — the epoch when the first stars and black holes were born after the Big Bang 13.8 billion years ago.
Orbiting the sun about 1 million miles away from Earth, the JWST spotted hundreds of unusually red and compact sources dubbed the “little red dots” (LRDs).
Their distinctive color is caused by a combination of effects, including the presence of dust and the phenomenon of “redshift” (in which light shifts to the red end of the spectrum as it travels vast distances).
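Redshift itself follows a simple relation: light emitted at rest wavelength lambda is observed at lambda times (1 + z). A quick sketch shows why light from the cosmic dawn lands in JWST’s infrared bands (the z = 5 example is illustrative, not tied to any particular little red dot):

```python
def observed_wavelength(rest_wavelength_nm, z):
    """Cosmological redshift: lambda_obs = lambda_rest * (1 + z)."""
    return rest_wavelength_nm * (1 + z)

# H-alpha emission (rest wavelength 656.3 nm, visible red) from a source
# at redshift z = 5 is stretched to roughly 3.9 microns: near-infrared.
print(round(observed_wavelength(656.3, 5), 1))  # 3937.8
```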
LRDs appeared about 600 million years after the Big Bang and then later vanished.
The dots are unusually compact and relatively bright, suggesting they either host enormous black holes (which shine brightly despite their name) or pack inconceivable numbers of stars into galaxies only one-fiftieth the size of our Milky Way.
“They are like cosmic fireworks,” said Pacucci. “They magically appear, and they are very visible for about 1 billion years. Then they just disappear.”
In a paper published recently in The Astrophysical Journal Letters, Pacucci and Abraham “Avi” Loeb, Frank B. Baird Jr. Professor of Science, propose the theory of galaxies being formed inside slowly spinning halos of dark matter to explain the abundance, compactness, and redshift distribution of LRDs.
In their model, the LRDs discovered thus far represent only the very slowest-spinning galaxies — the bottom 1 percent of the distribution.
In other words, the LRDs are not a fundamentally distinct population of galaxies, but just a small subset that exhibits unusual properties.
Some of their seemingly mysterious features may arise from observational bias: current technologies can only detect compact, low-spin halos because they concentrate light in bright cores; meanwhile, larger, more diffuse galaxies at higher redshifts remain invisible — despite being more common.
Loeb said, “If you assume the little red dots are typically in the first percentile of the spin distribution of dark matter halos, then you explain all their observational properties.”
Dale Kocevski, a leading researcher of LRDs and chair of the physics and astronomy department at Colby College, said the theory proposed by Pacucci and Loeb “makes a lot of sense.”
“This potentially adds to our fundamental understanding of these objects,” he said. “In addition, it provides a physical model that we can test going forward.”
Pacucci believes that the LRDs eventually will prove to be the signature discovery of the JWST. (Like many colleagues, he also suspects the little red dots do contain supermassive black holes, but that is not part of the new theory.)
He predicts they will generate new insights about the formation of galaxies and black holes during the cosmic dawn.
“We are now debating what is the nature of a fundamentally new kind of galaxy that we’ve never seen before,” Pacucci said. “This will fundamentally change how we view the early evolution of the universe.”
Music appears to decrease anxiety, discomfort in ER patients, study finds
Anna Lamb
Harvard Staff Writer
Playwright William Congreve wrote in the Restoration period that music “hath charms to soothe a savage breast.” And, as it turns out, back pain in 21st-century patients as well.
Back pain is a widespread problem across the nation, and its causes are often complex and difficult to treat with traditional medications. Millions annually suffer cases so acute they end up in emergency rooms.
A new Harvard study has found that patients who listened to music while in the emergency department for back pain showed decreased anxiety levels, which in turn decreased discomfort.
“There are a lot of reasons why people have back pain. It can be nerve-related, spinal cord issues, nerve compression — all of which don’t have a quick solution,” said Charlotte Goldfine, lead author and instructor in emergency medicine at Harvard Medical School. “Often we are using temporary methods like anti-inflammatory medications or analgesics, and in severe cases, opioid medications.”
There are more than 2.6 million emergency department visits for pain in the U.S. each year, the study states; worldwide, such pain accounts for 4.4 percent of all emergency department visits. Following the success of music therapy in other areas of medicine, Goldfine said the team made the leap that the intervention could show success for emergency back-pain patients.
“Music has been used in other settings and studied in the pre- and peri-operative space, as well as with pain management,” she said. “We were really thinking through how we could translate the work that had already been done in patients who were getting procedures or who had more painful scenarios. What we found is that it’s also a very easily deployable solution.”
Scott Weiner, co-author of the study, added that besides the ease of implementation — simply providing headphones and a music player to patients — the intervention is extremely cost-effective. Weiner is an associate professor of emergency medicine at Harvard Medical School and an emergency physician at Brigham and Women’s Hospital.
“It’s completely free besides the subscription for the music,” Weiner said.
The reason why the intervention is especially helpful, both Weiner and Goldfine said, is not because it targets root causes of the pain but because it may reduce exacerbating anxiety.
“Thinking about this originally, I drew inspiration from another similar study that looked at adult coloring books given to patients,” Weiner said. “It seems like just something to get your mind off of it, whether it be reading or music or coloring, is probably helpful.”
Emergency room patients often face overcrowded conditions and long waits, even when experiencing severe pain.
“It’s stressful because you’re there watching everything unfold,” Weiner said. “There are beeps; there’s chaos; and plus, they’re in pain. So the fact that maybe this aspect of distraction was enough to reduce their pain and anxiety with basically no harm at all to the patient is pretty remarkable.”
In the experiment, patients selected music to listen to for 10 minutes. Doctors then surveyed their pain at rest and with movement on a 10-point scale, before and after, and had them complete an anxiety questionnaire.
“We had a lot of discussions over what the best type of music intervention to choose would be,” Goldfine said.
But ultimately, she said, the music that patients find relaxing is subjective. Some people used a curated relaxation playlist. Many chose pop music.
“There was some Taylor Swift in there,” Goldfine said.
She added that doctors and patients themselves can implement the lessons from the study right now.
“I try to use it when I’m doing procedures on patients,” she said. “I have them put on whatever song they want, because I feel like it really does enhance the experience and with really no downside.”
Goldfine, Weiner, and their team are continuing to study the impact of music therapy in medicine. Currently, a study is in the works looking at connections between music and substance-use disorder.
The study found heart failure rates were higher in flooded areas, especially in New Jersey, and that the risk persisted for four to five years – not just weeks or months – after the storm.
Jessica Bodner is a Professor of the Practice and Violist of the Parker Quartet.
A local gem
Fresh Pond
Fresh Pond is a true Cambridge gem — beautiful in every season, and one of those rare places where you can actually feel the passage of the seasons day by day. It’s where you can go to be alone, walk with a friend, or feel the quiet energy of a community sharing space. I run there regularly — it’s even where I casually ran my first half-marathon, looping around over and over again. It’s also been a joyful spot for our family: our vizsla Bodie ran free there for years, and when the time comes, we’ll likely bring a new puppy to those same trails. My husband and I have taken our son there since he was a newborn — for nature walks, bike rides, and countless discoveries along the way. It’s one of my happiest places in the world.
A musical moment
Thom Yorke and Jonny Greenwood campfire video
As a classical musician who’s spent my entire career playing chamber music, I’m always inspired by deep musical collaboration. I’ve loved Radiohead for 25 years, but seeing Thom Yorke and Jonny Greenwood play together in such an intimate setting still blew my mind. After decades of making music together, the trust and ease between them is incredible — it’s the kind of connection that unlocks a magical force that only art and collaboration can tap into.
A little luxury
Good coffee anywhere
My husband and I travel quite a bit to play concerts and guest teach, and it became so frustrating and sometimes deeply disappointing to look for good coffee — hotel coffee can sometimes be the absolute worst! We took matters into our own hands and started traveling with our own coffee system — Miir mugs, an Aeropress, a hand grinder, a collapsible kettle, and of course, coffee beans.
The Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) announced that Daniela Giardina has been named the new J-WAFS executive director. Giardina stepped into the role at the start of the fall semester, replacing founding executive director Renee J. Robins ’83, who is retiring after leading the program since its launch in 2014.
“Daniela brings a deep background in water and food security, along with excellent management and leadership skills,” says Robins. “Since I first met her nearly 10 years ago, I have been impressed with her commitment to working on global water and food challenges through research and innovation. I am so happy to know that I will be leaving J-WAFS in her experienced and capable hands.”
A decade of impact
J-WAFS fuels research, innovation, and collaboration to solve global water and food systems challenges. The mission of J-WAFS is to ensure safe and resilient supplies of water and food to meet the local and global needs of a dramatically growing population on a rapidly changing planet. J-WAFS funding opportunities are open to researchers in every MIT department, lab, and center, spanning all disciplines. Supported research projects include those involving engineering, science, technology, business, social science, economics, architecture, urban planning, and more. J-WAFS research and related activities include early-stage projects, sponsored research, commercialization efforts, student activities and mentorship, events that convene local and global experts, and international-scale collaborations.
The global water, food, and climate emergency makes J-WAFS’ work both timely and urgent. J-WAFS-funded researchers are achieving tangible, real-time solutions and results. Since its inception, J-WAFS has distributed nearly $26 million in grants, fellowships, and awards to the MIT community, supporting roughly 10 percent of MIT’s faculty and 300 students, postdocs, and research staff from 40 MIT departments, labs, and centers. J-WAFS grants have also helped researchers launch 13 startups and receive over $25 million in follow-on funding.
Giardina joins J-WAFS at an exciting time in the program’s history; in the spring, J-WAFS celebrated 10 years of supporting water and food research at MIT. The milestone was commemorated at a special event attended by MIT leadership, researchers, students, staff, donors, and others in the J-WAFS community. As J-WAFS enters its second decade, interest and opportunities for water and food research continue to grow. “I am truly honored to join J-WAFS at such a pivotal moment,” Giardina says.
Putting research into real-world practice
Giardina has nearly two decades of experience working with nongovernmental organizations and research institutions on humanitarian and development projects. Her work has taken her to Africa, Latin America, the Caribbean, and Central and Southeast Asia, where she has focused on water and food security projects. She has conducted technical trainings and assessments, and managed projects from design to implementation, including monitoring and evaluation.
Giardina comes to MIT from Oxfam America, where she directed disaster risk reduction and climate resilience initiatives, working on approaches to strengthen local leadership, community-based disaster risk reduction, and anticipatory action. Her role at Oxfam required her to oversee multimillion-dollar initiatives, supervising international teams, managing complex donor portfolios, and ensuring rigorous monitoring across programs. She connected hands-on research with community-oriented implementation, for example, by partnering with MIT’s D-Lab to launch an innovation lab in rural El Salvador. Her experience will help guide J-WAFS as it pursues impactful research that will make a difference on the ground.
Beyond program delivery, Giardina has played a strategic leadership role in shaping Oxfam’s global disaster risk reduction strategy and representing the organization at high-level U.N. and academic forums. She is multilingual and adept at building partnerships across cultures, having worked with governments, funders, and community-based organizations to strengthen resilience and advance equitable access to water and food.
Giardina holds a PhD in sustainable development from the University of Brescia in Italy. She also holds a master’s degree in environmental engineering from the Politecnico of Milan in Italy and has been a chartered engineer since 2005 (equivalent to holding a professional engineering license in the United States). She also serves as vice chair of the Boston Network for International Development, a nonprofit that connects and strengthens Boston’s global development community.
“I have seen first-hand how climate change, misuse of resources, and inequality are undermining water and food security around the globe,” says Giardina. “What particularly excites me about J-WAFS is its interdisciplinary approach in facilitating meaningful partnerships to solve many of these problems through research and innovation. I am eager to help expand J-WAFS’ impact by strengthening existing programs, developing new initiatives, and building strategic partnerships that translate MIT's groundbreaking research into real-world solutions,” she adds.
A legacy of leadership
Renee Robins will retire with over 23 years of service to MIT. Years before joining the staff, she graduated from MIT with dual bachelor’s degrees in biology and humanities/anthropology. She then went on to earn a master’s degree in public policy from Carnegie Mellon University. In 1998, she came back to MIT to serve in various roles across campus, including with the Cambridge MIT Institute, the MIT Portugal Program, the Mexico City Program, the Program on Emerging Technologies, and the Technology and Policy Program. She also worked at the Harvard Graduate School of Education, where she managed a $15 million research program as it scaled from implementation in one public school district to 59 schools in seven districts across North Carolina.
In late 2014, Robins joined J-WAFS as its founding executive director, playing a pivotal role in building it from the ground up and expanding the team to six full-time professionals. She worked closely with J-WAFS founding director Professor John H. Lienhard V to develop and implement funding initiatives, develop and shepherd corporate-sponsored research partnerships, and mentor students in the Water Club and Food and Agriculture Club, as well as numerous other students. Throughout the years, Robins has inspired a diverse range of researchers to consider how their capabilities and expertise can be applied to water and food challenges. Perhaps most importantly, her leadership has helped cultivate a vibrant community, bringing together faculty, students, and research staff to be exposed to unfamiliar problems and new methodologies, to explore how their expertise might be applied, to learn from one another, and to collaborate.
At the J-WAFS 10th anniversary event in May, Robins noted, “It has been a true privilege to work alongside John Lienhard, our dedicated staff, and so many others. It’s been particularly rewarding to see the growth of an MIT network of water and food researchers that J-WAFS has nurtured, which grew out of those few individuals who saw themselves to be working in solitude on these critical challenges.”
Lienhard also spoke, thanking Robins by saying she “was my primary partner in building J-WAFS and [she is] a strong leader and strategic thinker.”
Not only is Robins a respected leader, she is also a dear friend to so many at MIT and beyond. In 2021, she was recognized for her outstanding leadership and commitment to J-WAFS and the Institute with an MIT Infinite Mile Award in the area of the Offices of the Provost and Vice President for Research.
Outside of MIT, Robins has served on the Board of Trustees for the International Honors Program — a comparative multi-site study abroad program, where she previously studied comparative culture and anthropology in seven countries around the world. Robins has also acted as an independent consultant, including work on program design and strategy around the launch of the Université Mohammed VI Polytechnique in Morocco.
Continuing the tradition of excellence
Giardina will report to J-WAFS director Rohit Karnik, the Abdul Latif Jameel Professor of Water and Food in the MIT Department of Mechanical Engineering. Karnik was named the director of J-WAFS in January, succeeding John Lienhard, who retired earlier this year.
As executive director, Giardina will be instrumental in driving J-WAFS’ mission and impact. She will work with Karnik to help shape J-WAFS’ programs, long-term strategy, and goals. She will also be responsible for supervising J-WAFS staff, managing grant administration, and overseeing and advising on financial decisions.
“I am very grateful to John and Renee, who have helped to establish J-WAFS as the Institute’s preeminent program for water and food research and significantly expanded MIT’s research efforts and impact in the water and food space,” says Karnik. “I am confident that with Daniela as executive director, J-WAFS will continue in the tradition of excellence that Renee and John put into place, as we move into the program’s second decade,” he notes.
Giardina adds, “I am inspired by the legacy of Renee Robins and Professor Lienhard, and I look forward to working with Professor Karnik and the J-WAFS staff.”
Researcher outlines vision for implants that could help patients reclaim vital connection with their surroundings
One of the more baffling COVID symptoms is the loss of the sense of smell, which can persist long after the virus fades. According to research from Mass Eye and Ear, more than 20 million COVID patients lost smell or taste in 2021 alone. Roughly 27 percent of them had no or limited recovery.
For people with olfactory dysfunction (partial loss of smell) or anosmia (total or near-total loss of smell), the impact can be grave. The absence of smell can dull the taste of food, make cooking dangerous, and heighten anxieties around gas leaks and fires. It can even lead to depression in up to a third of patients. While some people regain partial function over time, effective treatments are rare.
A Harvard scientist hopes to change that.
“As I’ve gone from medical school into residency, into otolaryngology, and learned about cochlear implants — implants for hearing — it was always a thought,” said Eric Holbrook, director of the Division of Rhinology at Mass Eye and Ear and an associate professor of otolaryngology at Harvard Medical School. “Could you do the same for the sense of smell?”
In 2019, Holbrook published research looking into the question. He used small electrodes implanted in the brains of five patients to see if he could stimulate smell. In three of the patients, it worked — proof, Holbrook says, that olfactory implants can restore smell. Meanwhile, Dan Coelho and Richard Costanzo, both of Virginia Commonwealth University, were experimenting with electrode stimulation of the olfactory system in mice. The three scientists soon joined forces, leading to an informal meeting at a Dubai conference of otolaryngologists — the first international meeting, Holbrook said, focused on assistive devices for smell.
“Enough people were actually showing that they were working toward this,” he said. “It was just learning what people were working on in Europe, in Japan, that sort of thing. We didn’t realize they were pushing forward with this idea, also.”
Last month, Holbrook, Coelho, Costanzo, and global colleagues released an international opinion paper on emerging olfactory implant technologies in the journal Rhinology.
The development of olfactory implants presents a dizzying array of complications. In an intact olfactory system, odor molecules bind to chemoreceptors in the olfactory epithelium — layers of thin, mucus-lined tissue inside the nose. There, nerves transmit signals to the olfactory bulb, which effectively “maps” smells by pairing specific chemoreceptors with specific spherical structures called glomeruli. Each odor stimulates multiple receptor types, lighting up several parts of the bulb at once.
This makes smell far trickier to replicate than hearing, where sound frequencies map neatly along the cochlea.
Still, the prospect is tantalizing. The current scheme involves placing an electrode array near the olfactory bulb, bypassing damaged nasal neurons to stimulate the brain directly. Similar to a cochlear implant, an external receiver, possibly hidden in glasses or a headpiece, would be paired with an internal electrode through magnetic coupling.
Safety challenges are significant. The olfactory bulb lies inside the skull, so any device would have to guard against infection, particularly meningitis. And early stimulation experiments show that indiscriminate activation of many receptor pathways at once can produce unpleasant or phantom smells — hardly the boost to quality of life that patients are looking for. Developing a process for inducing specific smells will take trial and error.
“Through machine learning, you could have an implanted electrode array randomly choose certain areas to stimulate, and ask the person what they’re experiencing,” Holbrook said. “Over time, you’re getting closer and closer to producing recognizable smells.”
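The trial-and-error search Holbrook describes can be caricatured as a simple feedback loop. The sketch below is purely hypothetical: the electrode count, the scoring rule, and the `patient_feedback` function are invented stand-ins for illustration (in reality the "score" would be a patient's verbal report, and the search would be far more sophisticated).

```python
import random

random.seed(1)

N_ELECTRODES = 16
TARGET = {2, 5, 11}  # hypothetical site combination that evokes a recognizable smell

def patient_feedback(pattern: frozenset) -> float:
    """Stand-in for a patient rating how recognizable the evoked smell is.
    Here: reward overlap with the (unknown to the search) target sites,
    penalize spurious stimulation."""
    return len(pattern & TARGET) - 0.5 * len(pattern - TARGET)

best, best_score = frozenset(), float("-inf")
for trial in range(200):
    # Randomly choose a small set of electrode sites to stimulate,
    # then keep whichever pattern the "patient" rates highest.
    pattern = frozenset(random.sample(range(N_ELECTRODES), k=3))
    score = patient_feedback(pattern)
    if score > best_score:
        best, best_score = pattern, score

print(sorted(best))  # the best-scoring pattern found so far
```

Over many such trials the retained pattern drifts toward the sites that actually evoke the intended percept, which is the "closer and closer" convergence Holbrook alludes to.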
Functional, long-term implanted devices are likely years away, but Holbrook and his colleague Mark Richardson, a neurosurgeon at Mass General, hope to make a significant contribution sooner. Richardson sometimes temporarily implants electrodes into the brains of seizure patients. With patients’ consent, Holbrook and Richardson plan to place electrodes on the surface of the olfactory bulb during Richardson’s surgeries, when the patients are awake, and electrically stimulate areas on the bulb to see if patients perceive a smell without an odor present.
The journey to reliable implants won’t be quick or easy, Holbrook said, but he sees promise for those whose quality of life has suffered from a loss of smell.
“If you talk to someone who had COVID, and everything they eat is bland now because they can’t smell it, they would probably be happy to have coffee versus a strawberry. That would be a huge improvement for them.”
Nine researchers from ETH Zurich have just been awarded Starting Grants from the European Research Council (ERC). This is a positive signal for Zurich as a research location.
Asian nations are becoming increasingly central to shaping global politics, economics and security. As Asia’s global influence grows, its countries must take responsibility for stability and peace, rather than allowing rivalries to define the century.
This point was underscored during the dialogue between veteran diplomat and Distinguished Fellow at the NUS Asia Research Institute (ARI) Mr Kishore Mahbubani and Singapore’s former foreign minister Mr George Yeo in conjunction with the launch of the book, Can Asians Think of Peace? Essays on Managing Conflict in the Asian Century, on 22 August 2025.
The book, co-edited by Mr Mahbubani and Dr Kesava Chandra Varigonda and Ms Kristen Tang from ARI, compiles 61 essays written by global scholars, policymakers and experts for the Asian Peace Programme (APP) from the time of its launch in July 2020 to December 2024. The essays examine possible areas of conflict in Asia, offering Asia-focused, pragmatic perspectives on conflict management and peacebuilding.
Published in July 2025 as an open-access volume to coincide with APP’s fifth anniversary, the book has already been downloaded 103,000 times and reached #3 on The Straits Times non-fiction bestseller list, reflecting its broad reach across academia, policy circles and the public.
In his opening address at the launch, ARI Director Professor Tim Bunnell recalled how the APP was conceived in the wake of the 2020 military clashes between Chinese and Indian troops, as there was a recognition of the need for a credible peace initiative that was “Asia-focused, Asia-centred and Asia-led”.
“This book stands as proof of the important work the APP has undertaken in addressing that need, with recent tensions along the Thai-Cambodian border serving as a reminder that such peace-making efforts remain as vital as ever for the region,” said Prof Bunnell.
Speaking at the launch event attended by about 250 guests from academia and policymaking, Mr Yeo lauded APP for demonstrating what he called “a bias for peace”, describing how the book contained a “treasure trove of perspectives” on the region’s most pressing challenges.
He highlighted the critical role of understanding one’s opponents in geopolitics, as effective peacekeeping demands strategic insight and a clear grasp of what motivates the opponents and where their interests lie.
“When we start seeing problems from the other person’s perspective, and we can only do it if we respect the interlocutor, then the possibility of win-win outcomes becomes possible,” Mr Yeo said.
The dangers of ‘demonising opponents’ were also discussed during the dialogue between Mr Yeo and Mr Mahbubani.
“When you demonise the other, you become fearful, you become self-righteous. And out of self-righteousness arises the greatest evil. Those who are evil…believe they are doing right by the people,” said Mr Yeo, adding that this troubling mindset had become increasingly evident, for example, in US-China relations. Rather than allowing fear and self-righteousness to shape interactions, he emphasised the value of building understanding through shared histories and cultures.
Mr Mahbubani pointed to the recent warming of China–India relations, which he described as critical to peace in Asia. Mr Yeo concurred, citing symbolic gestures on both sides: President Xi Jinping’s 2014 visit to Prime Minister Narendra Modi’s home state of Gujarat, where the Buddhist monk Xuanzang once spent time, and Mr Modi’s reciprocal visit to Xi’s hometown of Xi’an, where Xuanzang had returned with Buddhist scriptures.
Mr Yeo drew on a striking metaphor by eminent historian and NUS University Professor Wang Gungwu, who described Singapore as the place where the “mandalas” of India and China overlap.
Extending this imagery, he noted that Southeast Asia is where the spheres of influence of China, India and the West intersect at varying intensities across the 10 ASEAN nations – giving the region a unique vantage point to understand all three. “The work which the APP does, in a sense, can only be done in Southeast Asia, and perhaps only in Singapore, which makes it so special,” he said.
Indeed, it is Mr Mahbubani’s hope that the APP can make a meaningful difference by serving as “a small candle” that illuminates the path to peace in Asia and beyond.
Can Asians Think of Peace? Essays on Managing Conflict in the Asian Century is a culmination of the APP’s work, with its open-access format allowing anyone worldwide to read and cite these essays, amplifying their reach across academia, policy, and public discourse.
Many attempts have been made to harness the power of new artificial intelligence and large language models (LLMs) to predict the outcomes of new chemical reactions. These have had limited success, in part because until now they have not been grounded in an understanding of fundamental physical principles, such as the law of conservation of mass. Now, a team of researchers at MIT has come up with a way of incorporating these physical constraints into a reaction prediction model, greatly improving the accuracy and reliability of its outputs.
The new work was reported Aug. 20 in the journal Nature, in a paper by recent postdoc Joonyoung Joung (now an assistant professor at Kookmin University, South Korea); former software engineer Mun Hong Fong (now at Duke University); chemical engineering graduate student Nicholas Casetti; postdoc Jordan Liles; physics undergraduate student Ne Dassanayake; and senior author Connor Coley, who is the Class of 1957 Career Development Professor in the MIT departments of Chemical Engineering and Electrical Engineering and Computer Science.
“The prediction of reaction outcomes is a very important task,” Joung explains. For example, if you want to make a new drug, “you need to know how to make it. So, this requires us to know what product is likely” to result from a given set of chemical inputs to a reaction. But most previous efforts to carry out such predictions look only at a set of inputs and a set of outputs, without looking at the intermediate steps or considering the constraints of ensuring that no mass is gained or lost in the process, which is not possible in actual reactions.
Joung points out that while large language models such as ChatGPT have been very successful in many areas of research, these models do not provide a way to limit their outputs to physically realistic possibilities, such as by requiring them to adhere to conservation of mass. These models use computational “tokens,” which in this case represent individual atoms, but “if you don’t conserve the tokens, the LLM model starts to make new atoms, or deletes atoms in the reaction.” Instead of being grounded in real scientific understanding, “this is kind of like alchemy,” he says. While many attempts at reaction prediction only look at the final products, “we want to track all the chemicals, and how the chemicals are transformed” throughout the reaction process from start to end, he says.
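To make the mass-conservation point concrete, here is a toy check (not code from the paper): counting atoms on each side of a reaction exposes exactly the kind of "alchemy" an unconstrained token generator can commit. The formulas and the helper functions are illustrative inventions.

```python
import re
from collections import Counter

def atom_counts(formula: str) -> Counter:
    """Count atoms in a simple molecular formula like 'C2H6O'."""
    counts = Counter()
    for symbol, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[symbol] += int(num) if num else 1
    return counts

def is_balanced(reactants, products) -> bool:
    """True if every atom on the left appears in equal number on the right."""
    left = sum((atom_counts(f) for f in reactants), Counter())
    right = sum((atom_counts(f) for f in products), Counter())
    return left == right

# Esterification: acetic acid + ethanol -> ethyl acetate + water
print(is_balanced(["C2H4O2", "C2H6O"], ["C4H8O2", "H2O"]))   # True
# A "hallucinated" product that drops a carbon atom fails the check
print(is_balanced(["C2H4O2", "C2H6O"], ["C3H6O2", "H2O"]))   # False
```

A token-by-token generator has no such built-in bookkeeping, which is why it can silently create or delete atoms.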
In order to address the problem, the team made use of a method developed back in the 1970s by chemist Ivar Ugi, which uses a bond-electron matrix to represent the electrons in a reaction. They used this system as the basis for their new program, called FlowER (Flow matching for Electron Redistribution), which allows them to explicitly keep track of all the electrons in the reaction to ensure that none are spuriously added or deleted in the process.
The system uses a matrix to represent the electrons in a reaction, and uses nonzero values to represent bonds or lone electron pairs and zeros to represent a lack thereof. “That helps us to conserve both atoms and electrons at the same time,” says Fong. This representation, he says, was one of the key elements to including mass conservation in their prediction system.
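As an illustrative sketch of Ugi's formalism as described above (not the FlowER code itself): in a bond-electron matrix, diagonal entries hold each atom's lone-pair electrons and off-diagonal entries hold bond orders. The sum of all entries equals the total valence electron count, so any valid reaction matrix must sum to zero, enforcing electron conservation by construction. The water example below is an assumption chosen for simplicity.

```python
import numpy as np

# Bond-electron (BE) matrix for water, atoms ordered [O, H, H]:
# diagonal = lone-pair electrons on each atom, off-diagonal = bond order.
B = np.array([
    [4, 1, 1],   # O: two lone pairs (4 electrons), bonded to both hydrogens
    [1, 0, 0],   # H
    [1, 0, 0],   # H
])

# Heterolytic O-H cleavage: H2O -> OH- + H+.  The reaction matrix R
# moves the two electrons of one O-H bond onto oxygen as a new lone pair.
R = np.array([
    [ 2, 0, -1],
    [ 0, 0,  0],
    [-1, 0,  0],
])

E = B + R  # BE matrix of the products

# A valid reaction matrix sums to zero: electrons are neither created nor lost.
assert R.sum() == 0
print(B.sum(), E.sum())  # both 8: oxygen's 6 valence electrons + one from each hydrogen
```

Because the conservation law is a property of the representation itself, a model that predicts R matrices cannot "invent" electrons the way a free-form token generator can.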
The system they developed is still at an early stage, Coley says. “The system as it stands is a demonstration — a proof of concept that this generative approach of flow matching is very well suited to the task of chemical reaction prediction.” While the team is excited about this promising approach, he says, “we’re aware that it does have specific limitations as far as the breadth of different chemistries that it’s seen.” Although the model was trained using data on more than a million chemical reactions, obtained from a U.S. Patent Office database, those data do not include certain metals and some kinds of catalytic reactions, he says.
“We’re incredibly excited about the fact that we can get such reliable predictions of chemical mechanisms” from the existing system, he says. “It conserves mass, it conserves electrons, but we certainly acknowledge that there’s a lot more expansion and robustness to work on in the coming years as well.”
But even in its present form, which is being made freely available through the online platform GitHub, “we think it will make accurate predictions and be helpful as a tool for assessing reactivity and mapping out reaction pathways,” Coley says. “If we’re looking toward the future of really advancing the state of the art of mechanistic understanding and helping to invent new reactions, we’re not quite there. But we hope this will be a steppingstone toward that.”
“It’s all open source,” says Fong. “The models, the data, all of them are up there,” including a previous dataset developed by Joung that exhaustively lists the mechanistic steps of known reactions. “I think we are one of the pioneering groups making this dataset, and making it available open-source, and making this usable for everyone,” he says.
The FlowER model matches or outperforms existing approaches in finding standard mechanistic pathways, the team says, and makes it possible to generalize to previously unseen reaction types. They say the model could potentially be relevant for predicting reactions for medicinal chemistry, materials discovery, combustion, atmospheric chemistry, and electrochemical systems.
In their comparisons with existing reaction prediction systems, Coley says, “using the architecture choices that we’ve made, we get this massive increase in validity and conservation, and we get a matching or a little bit better accuracy in terms of performance.”
He adds that “what’s unique about our approach is that while we are using these textbook understandings of mechanisms to generate this dataset, we’re anchoring the reactants and products of the overall reaction in experimentally validated data from the patent literature.” They are inferring the underlying mechanisms, he says, rather than just making them up. “We’re imputing them from experimental data, and that’s not something that has been done and shared at this kind of scale before.”
The next step, he says, is “we are quite interested in expanding the model’s understanding of metals and catalytic cycles. We’ve just scratched the surface in this first paper,” and most of the reactions included so far don’t include metals or catalysts, “so that’s a direction we’re quite interested in.”
In the long term, he says, “a lot of the excitement is in using this kind of system to help discover new complex reactions and help elucidate new mechanisms. I think that the long-term potential impact is big, but this is of course just a first step.”
The work was supported by the Machine Learning for Pharmaceutical Discovery and Synthesis consortium and the National Science Foundation.
The FlowER (Flow matching for Electron Redistribution) system allows a researcher to explicitly keep track of all the electrons in a reaction to ensure that none are spuriously added or deleted in the process of predicting the outcome of a chemical reaction.
Best-selling writer and technology blogger Cory Doctorow will make the A.D. White Professor-at-Large program’s second dual-campus visit, ending his week at Cornell Tech in New York City. Four other professors will visit Cornell this fall.
Science fiction author, activist and journalist Cory Doctorow will visit Cornell Sept. 11-19 as an A.D. White Professor at Large, taking part in several events on campus and in the community.
Facing life-or-death call on who gets liver transplants
Surgeons, medical professionals apply risk calculus that gets even more complex for patients with drinking problems
Anna Lamb
Harvard Staff Writer
A series exploring how risk shapes our decisions.
Surgeons face a tough question when it comes to liver transplants. Donor organs are in short supply, and the decision of who will receive one is a life-or-death call. It’s one that can be made even more complicated if a patient has suffered from alcohol-use disorder.
“As doctors, we always want to save lives — especially in this setting and in those patients who are very young,” said Wei Zhang, a transplant hepatologist at Mass General Hospital. “But we also have to balance that the organs are very sparse. The question is: If a patient undergoes a liver transplantation but dies within the first five years of liver transplantation, was it worth it?”
The risk calculus for physicians like Zhang and other health care professionals is thorny, involving evaluations of a patient’s medical condition, support network, and personal history, as well as an understanding of alcohol-use disorder, which is associated with a higher incidence of liver disease.
The stakes are high. Patients with decompensated liver disease, or what commonly has been referred to as end-stage liver disease, have drastically shortened life expectancies without transplantation. In one study, patients in this stage who developed complications lived only two years after diagnosis.
“If we know a patient is going to relapse after liver transplant, the evidence is that the chance of them developing recurrent cirrhosis in three years is about 50 percent and the chance of dying from the recurrent liver disease in five years is about 50 percent,” Zhang said. “We do a lot of interventions to prevent them from going back to drinking and improve their quality of life.”
In addition to his work as a hepatologist, and as an assistant professor of medicine at Harvard Medical School, Zhang is also the director of MGH’s Alcohol-Associated Liver Disease Clinic where he has helped guide unique post-transplant programs aimed at protecting the long-term health of patients and their new organs.
“As doctors, we have to be realistic,” he said. “We have to isolate ourselves from the decision that we make, because it’s not my decision. It’s actually a consensus from the entire committee. I do sometimes question myself … if I pushed harder, could I give the patient a chance of getting a liver transplantation? But at the end of the day, it’s a team effort.”
And, Zhang added, he finds solace in knowing that if one patient isn’t a good fit for transplantation, the organ will help save another life.
“I also understand that if a patient receives an organ, it means that another patient is not able to receive the organ,” he said. “So when I think of that, I find a little bit of comfort.”
In the last decade, the field of transplant hepatology has changed drastically. In the not-so-distant past, all patients coming into the hospital with a failing liver and any history of alcohol abuse were denied life-saving surgery.
“When I was doing my residency, most of those patients did not have any chance of being evaluated for liver transplantation,” Zhang said.
Now, he added, there are still quite a few hurdles that patients need to clear to be approved for transplantation. But there’s hope — especially for those with strong support at home.
“If a patient has no prior knowledge of their drinking causing the liver disease, has no known liver disease, and they come to the hospital actively drinking, there are two different criteria that we want those patients to meet,” Zhang said. “One is called medical criteria. The other one is called psychosocial criteria.”
Zhang said the first step in assessing a patient as a candidate for a liver transplant is to rule out any underlying health conditions such as heart and lung issues that may cause issues during surgery.
The second is to evaluate their likelihood of making behavior changes to protect their health. This step, Zhang said, usually requires a multidisciplinary team, including a social worker, an addiction specialist, and a hepatologist, to reach a consensus.
“Some of the factors that we look at is if a patient has insights, meaning, does the patient think that the liver disease is caused by alcohol?” Zhang said. “There are patients who, for various reasons — one of them is probably stigma — don’t acknowledge that the liver disease is caused by alcohol. The risk is that if they get a liver transplantation, and don’t think they need treatments, they may relapse.”
The other piece of psychosocial criteria, Zhang said, is social support. This includes having strong family ties, stable housing, and the overall ability to seek support after surgery.
“Then those patients would be considered as good candidates with acceptable risk for post-liver-transplant relapse, and we can move on for a liver transplant evaluation,” Zhang said.
Binge drinking and high-risk drinking are on the rise. According to Zhang, younger patients and more female patients are increasingly suffering severe consequences like liver failure and cirrhosis. The youngest patient with cirrhosis he’s seen, Zhang said, was 22.
Such concerning trends suggest physicians will face a growing risk of burnout as they make more of these high-stakes decisions. Zhang says that for him, it all comes with the territory.
“It’s not an easy job, but I do love it,” he said. “The most important piece of this is that if I save lives, I’m happy.”
Cornell University alumnus Scott Belsky ‘02, partner at entertainment studio A24 and founder of A24 Labs and Behance, has joined the Cornell Tech Council.
Best-selling writer and technology blogger Cory Doctorow will make the A.D. White Professor-at-Large program’s second dual-campus visit, ending his week at Cornell Tech in New York City. Four other professors will visit Cornell this fall.
Students at ETH Zurich have developed a laser powder bed fusion machine that follows a circular tool path to print round components and can process multiple metals at once. The system significantly reduces manufacturing time and opens up new possibilities for aerospace and industry. ETH has filed a patent application for the machine.
Synthetic data are artificially generated by algorithms to mimic the statistical properties of actual data, without containing any information from real-world sources. While concrete numbers are hard to pin down, some estimates suggest that more than 60 percent of data used for AI applications in 2024 was synthetic, and this figure is expected to grow across industries.
Because synthetic data don’t contain real-world information, they hold the promise of safeguarding privacy while reducing the cost and increasing the speed at which new AI models are developed. But using synthetic data requires careful evaluation, planning, and checks and balances to prevent loss of performance when AI models are deployed.
To unpack some pros and cons of using synthetic data, MIT News spoke with Kalyan Veeramachaneni, a principal research scientist in the Laboratory for Information and Decision Systems and co-founder of DataCebo, whose open-core platform, the Synthetic Data Vault, helps users generate and test synthetic data.
Q: How are synthetic data created?
A: Synthetic data are algorithmically generated but do not come from a real situation. Their value lies in their statistical similarity to real data. If we’re talking about language, for instance, synthetic data look very much as if a human had written those sentences. While researchers have created synthetic data for a long time, what has changed in the past few years is our ability to build generative models out of data and use them to create realistic synthetic data. We can take a little bit of real data and build a generative model from that, which we can use to create as much synthetic data as we want. Plus, the model creates synthetic data in a way that captures all the underlying rules and infinite patterns that exist in the real data.
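The recipe described here — fit a generative model to a small amount of real data, then sample as much synthetic data as you want — can be sketched in a few lines. In this toy sketch the "generative model" is just a multivariate Gaussian, and the columns and numbers are invented for illustration; production platforms use far richer model families:

```python
import numpy as np

rng = np.random.default_rng(0)

# A small sample of "real" tabular data: two correlated columns
# (say, purchase amount and item count -- values invented for illustration).
real = rng.multivariate_normal([50.0, 3.0], [[100.0, 8.0], [8.0, 1.0]], size=200)

# "Build a generative model from that": here, estimate mean and covariance.
mu = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Sample as much synthetic data as we want from the fitted model.
synthetic = rng.multivariate_normal(mu, cov, size=10_000)

# The synthetic sample mimics the statistics of the real data
# without containing any of its actual rows.
print(synthetic.shape)
```

The same fit-then-sample pattern carries over when the Gaussian is swapped for a deep generative model.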
There are essentially four different data modalities: language, video or images, audio, and tabular data. All four of them have slightly different ways of building the generative models to create synthetic data. An LLM, for instance, is nothing but a generative model from which you are sampling synthetic data when you ask it a question.
A lot of language and image data are publicly available on the internet. But tabular data, which is the data collected when we interact with physical and social systems, is often locked up behind enterprise firewalls. Much of it is sensitive or private, such as customer transactions stored by a bank. For this type of data, platforms like the Synthetic Data Vault provide software that can be used to build generative models. Those models then create synthetic data that preserve customer privacy and can be shared more widely.
One powerful thing about this generative modeling approach for synthesizing data is that enterprises can now build a customized, local model for their own data. Generative AI automates what used to be a manual process.
Q: What are some benefits of using synthetic data, and which use-cases and applications are they particularly well-suited for?
A: One fundamental application which has grown tremendously over the past decade is using synthetic data to test software applications. There is data-driven logic behind many software applications, so you need data to test that software and its functionality. In the past, people have resorted to manually generating data, but now we can use generative models to create as much data as we need.
Users can also create specific data for application testing. Say I work for an e-commerce company. I can generate synthetic data that mimics real customers who live in Ohio and made transactions pertaining to one particular product in February or March.
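A minimal sketch of that kind of targeted test-data generation follows. The schema is hypothetical: the field names, product list, and value ranges are all assumptions for illustration, not drawn from any real system:

```python
import random
import datetime

random.seed(42)

# Hypothetical product catalog for the test scenario.
PRODUCTS = ["umbrella", "raincoat", "boots"]

def synthetic_transaction():
    """One synthetic e-commerce record matching a test scenario:
    an Ohio customer buying a target product in February or March."""
    month = random.choice([2, 3])
    day = random.randint(1, 28)
    return {
        "customer_id": f"CUST-{random.randint(10_000, 99_999)}",
        "state": "OH",
        "product": random.choice(PRODUCTS),
        "amount": round(random.uniform(5.0, 120.0), 2),
        "date": datetime.date(2024, month, day).isoformat(),
    }

rows = [synthetic_transaction() for _ in range(1000)]
assert all(r["state"] == "OH" for r in rows)
assert all(r["date"][5:7] in ("02", "03") for r in rows)
```

In practice a conditional generative model would produce such records with realistic correlations; the hand-rolled sampler above only illustrates the shape of the task.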
Because synthetic data aren’t drawn from real situations, they are also privacy-preserving. One of the biggest problems in software testing has been getting access to sensitive real data for testing software in non-production environments, due to privacy concerns. Another immediate benefit is in performance testing. You can create a billion transactions from a generative model and test how fast your system can process them.
Another application where synthetic data hold a lot of promise is in training machine-learning models. Sometimes, we want an AI model to help us predict an event that is less frequent. A bank may want to use an AI model to predict fraudulent transactions, but there may be too few real examples to train a model that can identify fraud accurately. Synthetic data provide data augmentation — additional data examples that are similar to the real data. These can significantly improve the accuracy of AI models.
Also, sometimes users don’t have time or the financial resources to collect all the data. For instance, collecting data about customer intent would require conducting many surveys. If you end up with limited data and then try to train a model, it won’t perform well. You can augment by adding synthetic data to train those models better.
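One common way to do the augmentation described above is SMOTE-style interpolation between real minority-class examples. This sketch uses invented toy numbers and is only one of several augmentation strategies:

```python
import numpy as np

rng = np.random.default_rng(1)

# Imbalanced toy dataset: many legitimate transactions, very few frauds.
legit = rng.normal(loc=[50.0, 1.0], scale=[10.0, 0.5], size=(990, 2))
fraud = rng.normal(loc=[500.0, 6.0], scale=[50.0, 1.0], size=(10, 2))

def augment_minority(X, n_new, rng):
    """SMOTE-style augmentation: interpolate between random pairs of
    real minority examples to create similar synthetic ones."""
    i = rng.integers(0, len(X), size=n_new)
    j = rng.integers(0, len(X), size=n_new)
    t = rng.random((n_new, 1))
    return X[i] + t * (X[j] - X[i])

synthetic_fraud = augment_minority(fraud, n_new=490, rng=rng)
X_balanced = np.vstack([legit, fraud, synthetic_fraud])
y_balanced = np.array([0] * 990 + [1] * 500)
print(X_balanced.shape)  # (1490, 2): the training set is far less imbalanced
```

A classifier trained on `X_balanced` sees hundreds of fraud-like examples instead of ten, which is the accuracy gain the augmentation is after.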
Q: What are some of the risks or potential pitfalls of using synthetic data, and are there steps users can take to prevent or mitigate those problems?
A: One of the biggest questions people often have in their mind is, if the data are synthetically created, why should I trust them? Determining whether you can trust the data often comes down to evaluating the overall system where you are using them.
There are a lot of aspects of synthetic data we have been able to evaluate for a long time. For instance, there are existing methods to measure how close synthetic data are to real data, and we can measure their quality and whether they preserve privacy. But there are other important considerations if you are using those synthetic data to train a machine-learning model for a new use case. How would you know the data are going to lead to models that still make valid conclusions?
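As one concrete example of measuring how close synthetic data are to real data, column-wise distributions can be compared with the two-sample Kolmogorov-Smirnov statistic. The sketch below implements the statistic directly with NumPy on invented toy data; real evaluation suites combine many such metrics:

```python
import numpy as np

rng = np.random.default_rng(2)

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between
    the two empirical CDFs (0 = identical samples, 1 = fully disjoint)."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

# "Real" column and synthetic columns from a good and a bad fit.
real = rng.gamma(shape=2.0, scale=30.0, size=5000)
good_synth = rng.gamma(shape=2.0, scale=30.0, size=5000)   # well-matched model
bad_synth = rng.normal(loc=100.0, scale=10.0, size=5000)   # wrong distribution

# A well-fitted model yields a much smaller KS statistic.
print(ks_statistic(real, good_synth) < ks_statistic(real, bad_synth))
```

A small KS statistic only certifies marginal similarity; as the interview notes, task-specific efficacy still has to be checked separately.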
New efficacy metrics are emerging, and the emphasis is now on efficacy for a particular task. You must really dig into your workflow to ensure the synthetic data you add to the system still allow you to draw valid conclusions. That is something that must be done carefully on an application-by-application basis.
Bias can also be an issue. Since it is created from a small amount of real data, the same bias that exists in the real data can carry over into the synthetic data. Just like with real data, you would need to purposefully make sure the bias is removed through different sampling techniques, which can create balanced datasets. It takes some careful planning, but you can calibrate the data generation to prevent the proliferation of bias.
To help with the evaluation process, our group created the Synthetic Data Metrics Library. We worried that people would use synthetic data in their environment and it would give different conclusions in the real world. We created a metrics and evaluation library to ensure checks and balances. The machine learning community has faced a lot of challenges in ensuring models can generalize to new situations. The use of synthetic data adds a whole new dimension to that problem.
I expect that the old systems of working with data, whether to build software applications, answer analytical questions, or train models, will dramatically change as we get more sophisticated at building these generative models. A lot of things we have never been able to do before will now be possible.
If your hand lotion is a bit runnier than usual coming out of the bottle, it might have something to do with the goop’s “mechanical memory.”
Soft gels and lotions are made by mixing ingredients until they form a stable and uniform substance. But even after a gel has set, it can hold onto “memories,” or residual stress, from the mixing process. Over time, the material can give in to these embedded stresses and slide back into its former, premixed state. Mechanical memory is, in part, why hand lotion separates and gets runny over time.
Now, an MIT engineer has devised a simple way to measure the degree of residual stress in soft materials after they have been mixed, and found that common products like hair gel and shaving cream have longer mechanical memories, holding onto residual stresses for longer periods of time than manufacturers might have assumed.
In a study appearing today in Physical Review Letters, Crystal Owens, a postdoc in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), presents a new protocol for measuring residual stress in soft, gel-like materials, using a standard benchtop rheometer.
Applying this protocol to everyday soft materials, Owens found that if a gel is made by mixing it in one direction, once it settles into a stable and uniform state, it effectively holds onto the memory of the direction in which it is mixed. Even after several days, the gel will hold some internal stress that, if released, will cause the gel to shift in the direction opposite to how it was initially mixed, reverting back to its earlier state.
“This is one reason different batches of cosmetics or food behave differently even if they underwent ‘identical’ manufacturing,” Owens says. “Understanding and measuring these hidden stresses during processing could help manufacturers design better products that last longer and perform more predictably.”
A soft glass
Hand lotion, hair gel, and shaving cream all fall under the category of “soft glassy materials” — materials that exhibit properties of both solids and liquids.
“Anything you can pour into your hand and it forms a soft mound is going to be considered a soft glass,” Owens explains. “In materials science, it’s considered a soft version of something that has the same amorphous structure as glass.”
In other words, a soft glassy material is a strange amalgam of a solid and a liquid. It can be poured out like a liquid, and it can hold its shape like a solid. Once they are made, these materials exist in a delicate balance between solid and liquid. And Owens wondered: For how long?
“What happens to these materials after very long times? Do they finally relax or do they never relax?” Owens says. “From a physics perspective, that’s a very interesting concept: What is the essential state of these materials?”
Twist and hold
In the manufacturing of soft glassy materials such as hair gel and shampoo, ingredients are first mixed into a uniform product. Quality control engineers then let a sample sit for about a minute — a period of time that they assume is enough to allow any residual stresses from the mixing process to dissipate. In that time, the material should settle into a steady, stable state, ready for use.
But Owens suspected that the materials may hold some degree of stress from the production process long after they’ve appeared to settle.
“Residual stress is a low level of stress that’s trapped inside a material after it’s come to a steady state,” Owens says. “This sort of stress has not been measured in these sorts of materials.”
To test her hypothesis, she carried out experiments with two common soft glassy materials: hair gel and shaving cream. She made measurements of each material in a rheometer — an instrument consisting of two rotating plates that can twist and press a material together at precisely controlled pressures and forces that relate directly to the material’s internal stresses and strains.
In her experiments, she placed each material in the rheometer and spun the instrument’s top plate around to mix the material. Then she let the material settle, and then settle some more — much longer than one minute. During this time, she observed the amount of force it took the rheometer to hold the material in place. She reasoned that the greater the rheometer’s force, the more it must be counteracting any stress within the material that would otherwise cause it to shift out of its current state.
Over multiple experiments using this new protocol, Owens found that different types of soft glassy materials held a significant amount of residual stress, long after most researchers would assume the stress had dissipated. What’s more, she found that the degree of stress that a material retained was a reflection of the direction in which it was initially mixed, and when it was mixed.
“The material can effectively ‘remember’ which direction it was mixed, and how long ago,” Owens says. “And it turns out they hold this memory of their past, a lot longer than we used to think.”
In addition to the protocol she has developed to measure residual stress, Owens has developed a model to estimate how a material will change over time, given the degree of residual stress that it holds. Using this model, she says scientists might design materials with “short-term memory,” or very little residual stress, such that they remain stable over longer periods.
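Owens's actual model isn't reproduced here, but the general idea of stress relaxing toward a nonzero residual plateau, rather than toward zero, can be illustrated with a standard exponential-relaxation form. All parameter values below are invented for illustration:

```python
import numpy as np

# Illustrative relaxation with a residual plateau:
#   sigma(t) = sigma_r + (sigma_0 - sigma_r) * exp(-t / tau)
# sigma_r is the residual stress the material never sheds on its own.
# This is a textbook form, not the model from the paper.

def stress(t, sigma_0=100.0, sigma_r=8.0, tau=30.0):
    """Stress (arbitrary units) at time t (seconds) after mixing stops."""
    return sigma_r + (sigma_0 - sigma_r) * np.exp(-t / tau)

t = np.array([0.0, 60.0, 3600.0])  # start, the one-minute rest, one hour
print(stress(t).round(2))
```

The point of the plateau term: after the customary one-minute rest the stress has dropped sharply, but even an hour later it sits at `sigma_r` rather than zero, which is the "memory" the measurements detected.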
One material where she sees room for such improvement is asphalt — a substance that is first mixed, then poured in molten form over a surface where it then cools and settles over time. She suspects that residual stresses from the mixing of asphalt may contribute to cracks forming in pavement over time. Reducing these stresses at the start of the process could lead to longer-lasting, more resilient roads.
“People are inventing new types of asphalt all the time to be more eco-friendly, and all of these will have different levels of residual stress that will need some control,” she says. “There’s plenty of room to explore.”
This research was supported, in part, by MIT’s Postdoctoral Fellowship for Engineering Excellence and an MIT Mathworks Fellowship.
Sixty-three years after their formal separation, NUS and Universiti Malaya (UM) continue to honour their shared origins and enduring relationship.
Late last month, the two universities came together in Ipoh, Malaysia, to celebrate their social and academic ties through the 54th edition of the UM-NUS Inter-University Tunku Chancellor Golf Tournament, as well as the concurrent UM-NUS Joint Academic Symposium on Precision Health.
To mark the occasion, a royal dinner was held on 25 August 2025 at the Istana Iskandariah, Kuala Kangsar. The event was hosted by the UM Chancellor, His Royal Highness (HRH) Sultan Dr Nazrin Muizzuddin Shah, Sultan of Perak, and the Raja Permaisuri of Perak, Her Royal Highness Tuanku Zara Salim. It was attended by the NUS Chancellor and President of the Republic of Singapore, Mr Tharman Shanmugaratnam, and his spouse, Mrs Jane Ittogi Shanmugaratnam, as well as Singapore’s High Commissioner to Malaysia, Mr Vanu Gopala Menon (Business ’85) and his spouse, Mrs Jayanthi Menon (Law ’89). Other members of the NUS delegation included Mr Hsieh Fu Hua, Chairman of the NUS Board of Trustees; Professor Tan Eng Chye (Science ’85), NUS President; and Ms Ovidia Lim-Rajaram (Arts & Social Sciences ’89), NUS Chief Alumni Officer, alongside participating academics and golfers.
In remarks made at the dinner, Professor Tan stated, “This is the second year we have paired the tournament with an academic symposium, symbolising the diverse and multilayered nature of the UM-NUS relationship: one that values both personal connection and intellectual exchange, each deepening and enriching the other.”
The sentiment was echoed by the Vice-Chancellor of UM, Professor Dato' Seri Ir. Dr Noor Azuan bin Abu Osman, who observed, “What gives universities their unique power is that we do not compete for resources in the same way states or corporations do. Instead, we compete by collaborating, and we grow stronger by sharing.”
Partners in progress
This commitment to collaboration and knowledge exchange was evident at the joint academic symposium on 26 August, themed around precision health. The event featured presentations from 10 distinguished researchers and professors from both UM and NUS, with speakers highlighting the ongoing transformation of healthcare from a one-size-fits-all model to one driven by personalised medicine and data-driven solutions.
“It’s always meaningful when we can come together to leverage our complementary strengths and address shared challenges,” said Professor Tai E Shyong from the NUS Yong Loo Lin School of Medicine. “UM and NUS have tremendous potential to advance medical research focused on Asian populations, with events like this providing a valuable platform to exchange ideas, foster collaboration, and drive progress in ways that ultimately improve the healthcare outcomes of our communities.”
Friendship in full swing
Complementing the joint symposium, the UM-NUS Inter-University Tunku Chancellor Golf Tournament took place on 26 August 2025 at the Royal Perak Golf Club, following a social round held the previous day at the Meru Valley Golf & Country Club. More than 100 golfers—including faculty, staff and alumni from both institutions—participated in the event.
First held in 1968 as a symbol of the strong relationship between senior leaders of UM and NUS, the longstanding tournament is hosted alternately by the two universities, strengthening ties across their academic and alumni communities.
This year, the UM team emerged victorious, capping off an eventful two days reaffirming more than six decades of friendship and collaboration.
Photos by Veasey Conway/Harvard Staff Photographer
Jason Sweet
Harvard Staff Writer
Garber points to value of argument in sustaining communities at first Morning Prayers of school year
Harvard President Alan Garber urged students and faculty to embrace disagreement as necessary to the vitality and advancement of meaningful institutions at the first Morning Prayers of the academic year.
In a packed Appleton Chapel, Garber began by reading from the successful 1886 petition to end compulsory attendance at Morning Prayers — a requirement that had been in place for 250 years.
The petitioners vigorously argued their case. But they also allowed that the ritual was not without value, possessing the potential to “bring the passing and casual under the shadow of the eternal; to make a man feel that amid the confusion of his hurried life, he can lay hold of an unvarying, underlying truth.”
“That was a tall order in 1886,” remarked Garber, “taller still in 2025, especially on the first morning of what will likely be a very challenging year marked by events outside our control.”
Nearly a century and a half later, Garber asked, “What truth might we lay hold of now?”
To answer the question, Garber reflected on the “vast, wonderful, and pervasive sense of curiosity” he observed in his 13 years as the University’s provost and chief academic officer.
“I witnessed many moments of joy and celebration punctuated by new questions, questions large and small, questions that seemed small but turned out to be large, questions too numerous to answer in a single career or even a lifetime,” he said.
Asking and reckoning with these questions was not always comfortable.
“Though our efforts often lead to affirmation and agreement,” he said, “they begin and proceed with confrontation and debate, fueled by a shared desire for deeper and richer understanding.”
Though individual work is important, he said, “success nearly always depends on a supportive but critical community.”
He cited his religion, Judaism, as an institution strengthened by disagreement. Central to Judaism is not just the Torah but also the Talmud — “an era-crossing record of ongoing rabbinical debate over the meaning of the Torah and its application to every facet of life.”
Garber described his own experience studying the Talmud and how the process of communal discovery — and argument — “helped sustain a religion and identity for millennia” after the destruction of the Temple and exile.
Ultimately, Garber argued that good-faith disagreements are fundamental to any strong community.
Just as institutions “stir and strengthen feelings of connection,” they also “challenge us to resist our inclinations, to confront our assumptions, and to develop the capacity to explore different views with the seriousness they deserve.”
At a time when the role of institutions such as Harvard has been thrust into the national discourse, Garber stressed the importance of remaining committed to their highest standards.
For him, that means leaving room for internal argument and discord — all with much bigger goals in mind.
“May this year bring opportunities for us to affirm and fulfill the commitment to veritas that unites and strengthens us as an institution and as a community,” he concluded. “And, as we argue, discuss, and work together under the shadow of the eternal, may our contributions to understanding — and the progress they enable — make our nation and the world a better place.”
Caroline Uhler is an Andrew (1956) and Erna Viterbi Professor of Engineering at MIT; a professor of electrical engineering and computer science in the Institute for Data, Science, and Society (IDSS); and director of the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard, where she is also a core institute member and a member of the scientific leadership team.
Uhler is interested in all the methods by which scientists can uncover causality in biological systems, ranging from causal discovery on observed variables to causal feature learning and representation learning. In this interview, she discusses machine learning in biology, areas that are ripe for problem-solving, and cutting-edge research coming out of the Schmidt Center.
Q: The Eric and Wendy Schmidt Center has four distinct areas of focus structured around four natural levels of biological organization: proteins, cells, tissues, and organisms. What, within the current landscape of machine learning, makes now the right time to work on these specific problem classes?
A: Biology and medicine are currently undergoing a “data revolution.” The availability of large-scale, diverse datasets — ranging from genomics and multi-omics to high-resolution imaging and electronic health records — makes this an opportune time. Inexpensive and accurate DNA sequencing is a reality, advanced molecular imaging has become routine, and single cell genomics is allowing the profiling of millions of cells. These innovations — and the massive datasets they produce — have brought us to the threshold of a new era in biology, one where we will be able to move beyond characterizing the units of life (such as all proteins, genes, and cell types) to understanding the ‘programs of life’, such as the logic of gene circuits and cell-cell communication that underlies tissue patterning and the molecular mechanisms that underlie the genotype-phenotype map.
At the same time, in the past decade, machine learning has seen remarkable progress with models like BERT, GPT-3, and ChatGPT demonstrating advanced capabilities in text understanding and generation, while vision transformers and multimodal models like CLIP have achieved human-level performance in image-related tasks. These breakthroughs provide powerful architectural blueprints and training strategies that can be adapted to biological data. For instance, transformers can model genomic sequences similar to language, and vision models can analyze medical and microscopy images.
Importantly, biology is poised to be not just a beneficiary of machine learning, but also a significant source of inspiration for new ML research. Much like agriculture and breeding spurred modern statistics, biology has the potential to inspire new and perhaps even more profound avenues of ML research. Unlike fields such as recommender systems and internet advertising, where there are no natural laws to discover and predictive accuracy is the ultimate measure of value, in biology, phenomena are physically interpretable, and causal mechanisms are the ultimate goal. Additionally, biology boasts genetic and chemical tools that enable perturbational screens on an unparalleled scale compared to other fields. These combined features make biology uniquely suited to both benefit greatly from ML and serve as a profound wellspring of inspiration for it.
Q: Taking a somewhat different tack, what problems in biology are still really resistant to our current tool set? Are there areas, perhaps specific challenges in disease or in wellness, which you feel are ripe for problem-solving?
A: Machine learning has demonstrated remarkable success in predictive tasks across domains such as image classification, natural language processing, and clinical risk modeling. However, in the biological sciences, predictive accuracy is often insufficient. The fundamental questions in these fields are inherently causal: How does a perturbation to a specific gene or pathway affect downstream cellular processes? What is the mechanism by which an intervention leads to a phenotypic change? Traditional machine learning models, which are primarily optimized for capturing statistical associations in observational data, often fail to answer such interventional queries. There is a strong need for biology and medicine to also inspire new foundational developments in machine learning.
The field is now equipped with high-throughput perturbation technologies — such as pooled CRISPR screens, single-cell transcriptomics, and spatial profiling — that generate rich datasets under systematic interventions. These data modalities naturally call for the development of models that go beyond pattern recognition to support causal inference, active experimental design, and representation learning in settings with complex, structured latent variables. From a mathematical perspective, this requires tackling core questions of identifiability, sample efficiency, and the integration of combinatorial, geometric, and probabilistic tools. I believe that addressing these challenges will not only unlock new insights into the mechanisms of cellular systems, but also push the theoretical boundaries of machine learning.
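The gap between statistical association and interventional queries can be made concrete with a few lines of simulation: in a toy linear structural causal model with a hidden confounder, the observational regression slope differs from the causal effect recovered under an intervention. All coefficients are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy structural causal model with a hidden confounder H:
#   H -> X, H -> Y, and X -> Y with true causal effect 1.0.
H = rng.normal(size=n)
X = 2.0 * H + rng.normal(size=n)
Y = 1.0 * X + 3.0 * H + rng.normal(size=n)

# Observational regression of Y on X is biased by the confounder
# (analytically, the slope is 2.2 here, not the causal 1.0).
obs_slope = np.cov(X, Y)[0, 1] / np.var(X)

# Simulated intervention do(X = x): set X by hand, severing the H -> X edge,
# as a perturbation experiment does for a gene.
X_do = rng.normal(size=n)
Y_do = 1.0 * X_do + 3.0 * H + rng.normal(size=n)
do_slope = np.cov(X_do, Y_do)[0, 1] / np.var(X_do)

print(round(obs_slope, 1), round(do_slope, 1))
```

A model trained purely on the observational data would confidently report the biased slope; only the interventional data recover the causal effect, which is why perturbation screens are so valuable.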
With respect to foundation models, a consensus in the field is that we are still far from creating a holistic foundation model for biology across scales, similar to what ChatGPT represents in the language domain — a sort of digital organism capable of simulating all biological phenomena. While new foundation models emerge almost weekly, these models have thus far been specialized for a specific scale and question, and focus on one or a few modalities.
Significant progress has been made in predicting protein structures from their sequences. This success has highlighted the importance of iterative machine learning challenges, such as CASP (Critical Assessment of Structure Prediction), which have been instrumental in benchmarking state-of-the-art algorithms for protein structure prediction and driving their improvement.
The Schmidt Center is organizing challenges to increase awareness in the ML field and make progress in the development of methods to solve causal prediction problems that are so critical for the biomedical sciences. With the increasing availability of single-gene perturbation data at the single-cell level, I believe predicting the effect of single or combinatorial perturbations, and which perturbations could drive a desired phenotype, are solvable problems. With our Cell Perturbation Prediction Challenge (CPPC), we aim to provide the means to objectively test and benchmark algorithms for predicting the effect of new perturbations.
Another area where the field has made remarkable strides is disease diagnostics and patient triage. Machine learning algorithms can integrate different sources of patient information (data modalities), generate missing modalities, identify patterns that may be difficult for us to detect, and help stratify patients based on their disease risk. While we must remain cautious about potential biases in model predictions, the danger of models learning shortcuts instead of true correlations, and the risk of automation bias in clinical decision-making, I believe this is an area where machine learning is already having a significant impact.
Q: Let’s talk about some of the headlines coming out of the Schmidt Center recently. What current research do you think people should be particularly excited about, and why?
A: In collaboration with Dr. Fei Chen at the Broad Institute, we have recently developed a method for the prediction of unseen proteins’ subcellular location, called PUPS. Many existing methods can only make predictions based on the specific protein and cell data on which they were trained. PUPS, however, combines a protein language model with an image in-painting model to utilize both protein sequences and cellular images. We demonstrate that the protein sequence input enables generalization to unseen proteins, and the cellular image input captures single-cell variability, enabling cell-type-specific predictions. The model learns how relevant each amino acid residue is for the predicted sub-cellular localization, and it can predict changes in localization due to mutations in the protein sequences. Since a protein’s function is closely tied to its subcellular localization, our predictions could provide insights into potential mechanisms of disease. In the future, we aim to extend this method to predict the localization of multiple proteins in a cell and possibly understand protein-protein interactions.
Together with Professor G.V. Shivashankar, a long-time collaborator at ETH Zürich, we have previously shown how simple images of cells stained with fluorescent DNA-intercalating dyes to label the chromatin can yield a lot of information about the state and fate of a cell in health and disease, when combined with machine learning algorithms. Recently, we have furthered this observation and proved the deep link between chromatin organization and gene regulation by developing Image2Reg, a method that enables the prediction of unseen genetically or chemically perturbed genes from chromatin images. Image2Reg utilizes convolutional neural networks to learn an informative representation of the chromatin images of perturbed cells. It also employs a graph convolutional network to create a gene embedding that captures the regulatory effects of genes based on protein-protein interaction data, integrated with cell-type-specific transcriptomic data. Finally, it learns a map between the resulting physical and biochemical representation of cells, allowing us to predict the perturbed gene modules based on chromatin images.
We also recently finalized the development of MORPH, a method for predicting the outcomes of unseen combinatorial gene perturbations and identifying the types of interactions occurring between the perturbed genes. MORPH can guide the design of the most informative perturbations for lab-in-a-loop experiments. Furthermore, the attention-based framework provably enables our method to identify causal relations among the genes, providing insights into the underlying gene regulatory programs. Finally, thanks to its modular structure, we can apply MORPH to perturbation data measured in various modalities, including not only transcriptomics, but also imaging. We are very excited about the potential of this method to enable the efficient exploration of the perturbation space to advance our understanding of cellular programs by bridging causal theory to important applications, with implications for both basic research and therapeutic applications.
“The current landscape of machine learning presents a unique opportunity to address problems across different levels of biological organization, from proteins to organisms, due to a data revolution in biology and significant advancements in AI,” says Caroline Uhler.
One in every eight people — 970 million globally — lives with mental illness, according to the World Health Organization, with depression and anxiety being the most common mental health conditions worldwide. Existing therapies for complex psychiatric disorders like depression, anxiety, and schizophrenia have limitations, and federal funding to address these shortcomings is growing increasingly uncertain.
Patricia and James Poitras ’63 have committed $8 million to the Poitras Center for Psychiatric Disorders Research to launch pioneering research initiatives aimed at uncovering the brain basis of major mental illness and accelerating the development of novel treatments.
“Federal funding rarely supports the kind of bold, early-stage research that has the potential to transform our understanding of psychiatric illness. Pat and I want to help fill that gap — giving researchers the freedom to follow their most promising leads, even when the path forward isn’t guaranteed,” says James Poitras, who is chair of the McGovern Institute for Brain Research board.
Their latest gift builds upon their legacy of philanthropic support for psychiatric disorders research at MIT, which now exceeds $46 million.
“With deep gratitude for Jim and Pat’s visionary support, we are eager to launch a bold set of studies aimed at unraveling the neural and cognitive underpinnings of major mental illnesses,” says Professor Robert Desimone, director of the McGovern Institute, home to the Poitras Center. “Together, these projects represent a powerful step toward transforming how we understand and treat mental illness.”
A legacy of support
Soon after joining the McGovern Institute Leadership Board in 2006, the Poitrases made a $20 million commitment to establish the Poitras Center for Psychiatric Disorders Research at MIT. The center’s goal, to improve human health by addressing the root causes of complex psychiatric disorders, is deeply personal to them both.
“We had decided many years ago that our philanthropic efforts would be directed towards psychiatric research. We could not have imagined then that this perfect synergy between research at MIT’s McGovern Institute and our own philanthropic goals would develop,” recalls Patricia.
The center supports research at the McGovern Institute and collaborative projects with institutions such as the Broad Institute of MIT and Harvard, McLean Hospital, Mass General Brigham, and other clinical research centers. Since its establishment in 2007, the center has enabled advances in psychiatric research including the development of a machine learning “risk calculator” for bipolar disorder, the use of brain imaging to predict treatment outcomes for anxiety, and studies demonstrating that mindfulness can improve mental health in adolescents.
For the past decade, the Poitrases have also fueled breakthroughs in the lab of McGovern investigator and MIT Professor Feng Zhang, backing the invention of powerful CRISPR systems and other molecular tools that are transforming biology and medicine. Their support has enabled the Zhang team to engineer new delivery vehicles for gene therapy, including vehicles capable of carrying genetic payloads that were once out of reach. The lab has also advanced innovative RNA-guided gene engineering tools such as NovaIscB, published in Nature Biotechnology in May 2025. These revolutionary genome editing and delivery technologies hold promise for the next generation of therapies needed for serious psychiatric illness.
In addition to fueling research in the center, the Poitras family has gifted two endowed professorships — the James and Patricia Poitras Professor of Neuroscience at MIT, currently held by Feng Zhang, and the James W. (1963) and Patricia T. Poitras Professor of Brain and Cognitive Sciences at MIT, held by Guoping Feng — and an annual postdoctoral fellowship at the McGovern Institute.
New initiatives at the Poitras Center
The Poitras family’s latest commitment to the Poitras Center will launch an ambitious set of new projects that bring together neuroscientists, clinicians, and computational experts to probe underpinnings of complex psychiatric disorders including schizophrenia, anxiety, and depression. These efforts reflect the center’s core mission: to speed scientific discovery and therapeutic innovation in the field of psychiatric brain disorders research.
McGovern cognitive neuroscientists Evelina Fedorenko PhD ’07, an associate professor, and Nancy Kanwisher ’80, PhD ’86, the Walter A. Rosenblith Professor of Cognitive Neuroscience — in collaboration with psychiatrist Ann Shinn of McLean Hospital — will explore how altered inner speech and reasoning contribute to the symptoms of schizophrenia. They will collect functional MRI data from individuals diagnosed with schizophrenia and matched controls as they perform reasoning tasks. The goal is to identify the brain activity patterns that underlie impaired reasoning in schizophrenia, a core cognitive disruption in the disorder.
A complementary line of investigation will focus on the role of inner speech — the “voice in our head” that shapes thought and self-awareness. The team will conduct a large-scale online behavioral study of neurotypical individuals to analyze how inner speech characteristics correlate with schizophrenia-spectrum traits. This will be followed by neuroimaging work comparing brain architecture among individuals with strong or weak inner voices and people with schizophrenia, with the aim of discovering neural markers linked to self-talk and disrupted cognition.
A different project led by McGovern neuroscientist and MIT Associate Professor Mark Harnett and 2024–2026 Poitras Center Postdoctoral Fellow Cynthia Rais focuses on how ketamine — an increasingly used antidepressant — alters brain circuits to produce rapid and sustained improvements in mood. Despite its clinical success, ketamine’s mechanisms of action remain poorly understood. The Harnett lab is using sophisticated tools to track how ketamine affects synaptic communication and large-scale brain network dynamics, particularly in models of treatment-resistant depression. By mapping these changes at both the cellular and systems levels, the team hopes to reveal how ketamine lifts mood so quickly — and inform the development of safer, longer-lasting antidepressants.
Guoping Feng is leveraging a new animal model of depression to uncover the brain circuits that drive major depressive disorder. The new animal model provides a powerful system for studying the intricacies of mood regulation. Feng’s team is using state-of-the-art molecular tools to identify the specific genes and cell types involved in this circuit, with the goal of developing targeted treatments that can fine-tune these emotional pathways.
“This is one of the most promising models we have for understanding depression at a mechanistic level,” says Feng, who is also associate director of the McGovern Institute. “It gives us a clear target for future therapies.”
Another novel approach to treating mood disorders comes from the lab of James DiCarlo, the Peter de Florez Professor of Neuroscience at MIT, who is exploring the brain’s visual-emotional interface as a therapeutic tool for anxiety. The amygdala, a key emotional center in the brain, is heavily influenced by visual input. DiCarlo’s lab is using advanced computational models to design visual scenes that may subtly shift emotional processing in the brain — essentially using sight to regulate mood. Unlike traditional therapies, this strategy could offer a noninvasive, drug-free option for individuals suffering from anxiety.
Together, these projects exemplify the kind of interdisciplinary, high-impact research that the Poitras Center was established to support.
“Mental illness affects not just individuals, but entire families who often struggle in silence and uncertainty,” adds Patricia Poitras. “Our hope is that Poitras Center scientists will continue to make important advancements and spark novel treatments for complex mental health disorders and, most of all, give families living with these conditions a renewed sense of hope for the future.”
Patricia (right) and James Poitras ’63 (center) with Guoping Feng, the James W. (1963) and Patricia T. Poitras Professor of Brain and Cognitive Sciences at MIT.
Chemist Richard Liu harnesses sunlight to trap greenhouse gases
What tricks can organic molecules be taught to help solve our planet’s biggest problems?
That’s the question driving Assistant Professor Richard Y. Liu ’15 as he pushes the frontiers of organic chemistry in pursuit of cleaner synthesis, smarter materials, and new ways to combat climate change.
Liu’s latest advance, detailed in a new paper in Nature Chemistry, harnesses sunlight to activate a particular variety of organic molecule. As described in the paper, these “photobases” then rapidly generate hydroxide ions that efficiently and reversibly trap CO₂.
This innovation in direct air capture marks a significant step toward scalable, low-energy solutions for removing greenhouse gases, Liu said. “What distinguishes this current work is the way we developed molecular switches to capture and release CO₂ with light. The general strategy of using light directly as the energy source is a new approach.”
Liu’s drive to understand the inner workings of organic chemistry dates to his years at Harvard College. “I started out thinking I’d be a physicist,” he said. “But in my first semester, I realized I was much more captivated by the creative act of building molecules in the chemistry lab.”
Richard Y. Liu. File photo by Stephanie Mitchell/Harvard Staff Photographer
Under the guidance of Ted Betley, Erving Professor of Chemistry, Liu uncovered a passion for organic synthesis, or designing and assembling complex structures atom by atom. “My mentor noticed that what really excited me wasn’t the iron complexes we were supposed to be working on,” Liu said. “It was the challenge of making the organic ligands themselves.”
Betley encouraged Liu to pursue these interests by working with a group led by Eric Jacobsen, Sheldon Emery Professor of Chemistry. There, Liu learned to think about molecules in new ways, to ask big questions, and to take big risks.
That ethos remained central during his doctoral work at the Massachusetts Institute of Technology, where Liu worked with chemist Stephen Buchwald to invent new copper and palladium catalysts that allow complex molecules to be prepared from convenient and readily available building blocks.
Now leading his own lab in the Department of Chemistry and Chemical Biology, Liu focuses on issues spanning the fields of organic, inorganic, and materials chemistry. His group’s research centers on organic redox platforms, metal-based catalysts for synthesis, and mechanistic studies that reveal how chemical transformations unfold.
“We’re looking at how to manipulate nonmetals — in molecules that are cheap, abundant, and tunable — to do chemistry traditionally reserved for metals,” Liu said.
Their work isn’t just theoretical; it’s built for the real world. Liu’s group is also developing new organic materials for energy storage and catalysis, as well as molecules that can capture and activate greenhouse gases. The recent direct air capture development was the product of a collaboration with Daniel G. Nocera, Patterson Rockwood Professor of Energy, and exemplifies the Liu lab’s pursuit of applicable solutions.
“Direct air capture is one of the most important emerging climate technologies, but existing methods require too much energy,” he said. “By designing molecules that use light to change their chemical state and trap CO₂, we’re demonstrating a path to a more efficient — and possibly solar-powered — future.”
Also responsible for the discovery is the lab’s interdisciplinary team of chemists, materials scientists, and engineers.
“We all speak the language of organic synthesis, but each person has an area of deeper expertise — from electrochemistry to sulfur chemistry to computational modeling,” Liu said. “This means we are able to generate new ideas at the intersections.”
Educating the next generation of scientists is core to that mission, he added. “Ultimately, the research we do here is kind of a platform for training and education,” Liu said. “The projects we do are ultimately for students to have a compelling and complete thesis that earns them their Ph.D. and serves as a springboard for what they’re going to do in the future.”
Yet the recent disruptions in federal funding present what Liu calls “an existential threat.” The photobase research was supported mainly by Liu’s CAREER award from the National Science Foundation. Its recent cancellation has jeopardized the project’s future while disrupting the work of trainees.
“Research done at universities and institutions of higher learning will ultimately reap profits for all of society,” Liu said. “Our research is not driven by profits, but meant to make our discoveries and advancements publicly available for the world’s benefit.”
Weight loss drugs protect heart patients, study suggests
40% lower risk of hospitalization or death
Mass General Brigham Communications
High-risk patients with heart failure had an over 40 percent lower risk of hospitalization or death after initiating the weight-loss drugs semaglutide or tirzepatide, compared with a “placebo by proxy,” according to a study out of Harvard-affiliated Mass General Brigham.
Specifically, researchers looked at heart failure with preserved ejection fraction (HFpEF), a condition where the heart’s ability to pump remains intact, yet the heart’s muscle has become so thick and stiff that the amount of blood being pumped doesn’t meet the body’s needs. This form of heart failure is especially common among people with obesity and Type 2 diabetes.
“Despite the widespread morbidity and mortality burden of HFpEF, current treatment options are limited,” said corresponding author Nils Krüger of the Division of Pharmacoepidemiology and Pharmacoeconomics at Brigham and Women’s Hospital and a postdoctoral research fellow at Harvard Medical School. “Both semaglutide and tirzepatide are well-known for their effects on weight loss and blood sugar control, but our study suggests they may also offer substantial benefits to patients with obesity and Type 2 diabetes by reducing adverse heart failure outcomes.”
By analyzing real-world data from over 90,000 HFpEF patients with obesity and Type 2 diabetes, researchers from MGB demonstrated that GLP-1 medications may significantly reduce the risk of hospitalization due to heart failure and all-cause mortality. Findings are published in JAMA and presented simultaneously at the European Society of Cardiology Congress.
Despite promising results from existing randomized controlled trials of semaglutide and tirzepatide in those with obesity-related HFpEF, regulatory authorities and professional societies have not approved or endorsed the use of these drugs for HFpEF, due in part to the studies’ relatively small sample sizes and unknown generalizability. The researchers therefore used data from three large U.S. insurance claims databases to emulate two previous, placebo-controlled trials of semaglutide and tirzepatide in new study populations that were an average of 19 times larger than those previously evaluated.
The researchers compared the one-year risk of heart failure hospitalization or death in new users of each GLP-1 drug to the risk of those outcomes in a “placebo” group of patients taking sitagliptin, a diabetes drug known to have no impact on HFpEF. After verifying the results of the previous, highly controlled studies, the researchers expanded their study population to make it more reflective of HFpEF cases in clinical practice, finding that overall, the drugs were associated with a greater than 40 percent reduction in heart failure hospitalization or all-cause mortality as compared with sitagliptin. Semaglutide and tirzepatide had similar effectiveness.
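The headline figure can be made concrete with hypothetical numbers (illustrative only; these event counts are not the study's data): a greater-than-40-percent reduction means the one-year risk in the GLP-1 group is less than 60 percent of the risk in the sitagliptin comparator group.

```python
# Hypothetical one-year event counts, invented for illustration:
glp1_events, glp1_patients = 300, 10_000  # new users of semaglutide/tirzepatide
sita_events, sita_patients = 520, 10_000  # sitagliptin ("placebo by proxy") comparators

risk_glp1 = glp1_events / glp1_patients   # one-year risk of hospitalization or death
risk_sita = sita_events / sita_patients

relative_risk = risk_glp1 / risk_sita
relative_risk_reduction = 1 - relative_risk  # > 0.40 means "over 40 percent lower risk"

print(f"RR = {relative_risk:.2f}, risk reduction = {relative_risk_reduction:.0%}")
```

The study's actual analysis adjusts for confounding between the treatment groups, which a raw risk ratio like this does not.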
Notably, both drugs had acceptable safety profiles. In the future, the researchers hope to clarify the long-term impact of GLP-1 medications, the HFpEF subpopulations that may derive the most benefit from them, and whether the drugs are also effective in reducing other cardiovascular risks.
“By using nationwide data and an innovative methodological approach, our team was able to expand the findings of previous trials to larger populations more representative of HFpEF patients treated in clinical practice,” Krüger said. “Our findings show that in the future, GLP-1 targeting medications could provide a much-needed treatment option for patients with heart failure.”
A new and powerful particle detector just passed a critical test in its goal to decipher the ingredients of the early universe.
The sPHENIX detector is the newest experiment at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider (RHIC) and is designed to precisely measure products of high-speed particle collisions. From the aftermath, scientists hope to reconstruct the properties of quark-gluon plasma (QGP) — a white-hot soup of subatomic particles known as quarks and gluons that is thought to have sprung into existence in the few microseconds following the Big Bang. Just as quickly, the mysterious plasma disappeared, cooling and combining to form the protons and neutrons that make up today’s ordinary matter.
Now, the sPHENIX detector has made a key measurement that proves it has the precision to help piece together the primordial properties of quark-gluon plasma.
In a paper in the Journal of High Energy Physics, scientists including physicists at MIT report that sPHENIX precisely measured the number and energy of particles that streamed out from gold ions that collided at close to the speed of light.
Straight ahead
This test is considered in physics to be a “standard candle,” meaning that the measurement is a well-established constant that can be used to gauge a detector’s precision.
In particular, sPHENIX successfully measured the number of charged particles that are produced when two gold ions collide, and determined how this number changes when the ions collide head-on, versus just glancing by. The detector’s measurements revealed that head-on collisions produced 10 times more charged particles, which were also 10 times more energetic, compared to less straight-on collisions.
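The centrality logic described above can be sketched with synthetic data (illustrative only, not sPHENIX data): in heavy-ion analyses, events are commonly classed by their own measured multiplicity, with the highest-multiplicity events corresponding to the most head-on collisions, and the mean multiplicity of the most central class is then compared against more peripheral ones.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-event charged-particle multiplicities (an exponential
# distribution stands in for the real, much more structured spectrum).
mult = rng.exponential(scale=100.0, size=100_000)

central_cut = np.percentile(mult, 90)      # top 10%: most head-on ("central") events
peripheral_cut = np.percentile(mult, 50)   # lower half: more glancing events

central_mean = mult[mult >= central_cut].mean()
peripheral_mean = mult[mult < peripheral_cut].mean()
ratio = central_mean / peripheral_mean     # central events carry far more particles
```

With these synthetic numbers the central class yields roughly an order of magnitude more particles per event than the peripheral one, qualitatively matching the factor-of-10 difference the detector observed.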
“This indicates the detector works as it should,” says Gunther Roland, professor of physics at MIT, who is a member and former spokesperson for the sPHENIX Collaboration. “It’s as if you sent a new telescope up in space after you’ve spent 10 years building it, and it snaps the first picture. It’s not necessarily a picture of something completely new, but it proves that it’s now ready to start doing new science.”
“With this strong foundation, sPHENIX is well-positioned to advance the study of the quark-gluon plasma with greater precision and improved resolution,” adds Hao-Ren Jheng, a graduate student in physics at MIT and a lead co-author of the new paper. “Probing the evolution, structure, and properties of the QGP will help us reconstruct the conditions of the early universe.”
The paper’s co-authors are all members of the sPHENIX Collaboration, which comprises over 300 scientists from multiple institutions around the world, including Roland, Jheng, and physicists at MIT’s Bates Research and Engineering Center.
“Gone in an instant”
Particle colliders such as Brookhaven’s RHIC are designed to accelerate particles at “relativistic” speeds, meaning close to the speed of light. When these particles are flung around in opposite, circulating beams and brought back together, any smash-ups that occur can release an enormous amount of energy. In the right conditions, this energy can very briefly exist in the form of quark-gluon plasma — the same stuff that sprung out of the Big Bang.
Just as in the early universe, quark-gluon plasma doesn’t hang around for very long in particle colliders. If and when QGP is produced, it exists for just 10⁻²², or about a tenth of a sextillionth, of a second. In this moment, quark-gluon plasma is incredibly hot, up to several trillion degrees Celsius, and behaves as a “perfect fluid,” moving as one entity rather than as a collection of random particles. Almost immediately, this exotic behavior disappears, and the plasma cools and transitions into more ordinary particles such as protons and neutrons, which stream out from the main collision.
“You never see the QGP itself — you just see its ashes, so to speak, in the form of the particles that come from its decay,” Roland says. “With sPHENIX, we want to measure these particles to reconstruct the properties of the QGP, which is essentially gone in an instant.”
“One in a billion”
The sPHENIX detector is the next generation of Brookhaven’s original Pioneering High Energy Nuclear Interaction eXperiment, or PHENIX, which measured collisions of heavy ions generated by RHIC. In 2021, sPHENIX was installed in place of its predecessor, as a faster and more powerful version, designed to detect quark-gluon plasma’s more subtle and ephemeral signatures.
The detector itself is about the size of a two-story house and weighs around 1,000 tons. It sits at the intersection of RHIC’s two main collider beams, where relativistic particles, accelerated from opposite directions, meet and collide, producing particles that fly out into the detector. The sPHENIX detector is able to catch and measure 15,000 particle collisions per second, thanks to its novel, layered components, including the MVTX, or micro-vertex — a subdetector that was designed, built, and installed by scientists at MIT’s Bates Research and Engineering Center.
Together, the detector’s systems enable sPHENIX to act as a giant 3D camera that can track the number, energy, and paths of individual particles during an explosion of particles generated by a single collision.
“SPHENIX takes advantage of developments in detector technology since RHIC switched on 25 years ago, to collect data at the fastest possible rate,” says MIT postdoc Cameron Dean, who was a main contributor to the new study’s analysis. “This allows us to probe incredibly rare processes for the first time.”
In the fall of 2024, scientists ran the detector through the “standard candle” test to gauge its speed and precision. Over three weeks, they gathered data from sPHENIX as the main collider accelerated and smashed together beams of gold ions traveling at nearly the speed of light. Their analysis of the data showed that sPHENIX accurately measured the number of charged particles produced in individual gold ion collisions, as well as the particles’ energies. What’s more, the detector was sensitive to a collision’s “head-on-ness,” and could observe that head-on collisions produced more particles with greater energy, compared to less direct collisions.
“This measurement provides clear evidence that the detector is functioning as intended,” Jheng says.
“The fun for sPHENIX is just beginning,” Dean adds. “We are currently back colliding particles and expect to do so for several more months. With all our data, we can look for the one-in-a-billion rare process that could give us insights on things like the density of QGP, the diffusion of particles through ultra-dense matter, and how much energy it takes to bind different particles together.”
This work was supported, in part, by the U.S. Department of Energy Office of Science, and the National Science Foundation.
The sPHENIX detector is the newest experiment at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider (RHIC) and is designed to precisely measure products of high-speed particle collisions. This image shows the installation of the inner hadronic calorimeter within the core of the sPHENIX superconducting solenoid magnet.
For a long time, Satik Movsesyan has envisioned a future of working in finance while also pursuing a full-time master’s degree at the MIT Sloan School of Management. She says the MITx MicroMasters Program in Finance provides her with the ideal opportunity to directly enhance her career with courses developed and delivered by MIT Sloan faculty.
Movsesyan first began actively pursuing ways to connect with the MIT community as a first-year student in her undergraduate program at the American University of Armenia, where she majored in business with a concentration in accounting and finance. That’s when she discovered the MicroMasters Program in Finance. Led by MIT Open Learning and MIT Sloan, the program offers learners an opportunity to advance in the finance field through a rigorous, comprehensive online curriculum comprising foundational courses, mathematical methods, and advanced modeling. During her senior year, she started taking courses in the program, beginning with 15.516x (Financial Accounting).
“I saw completing the MicroMasters program as a way to accelerate my time at MIT offline, as well as to prepare me for the academic rigor,” says Movsesyan. “The program provides a way for me to streamline my studies, while also working toward transforming capital markets here in Armenia — in a way, also helping me to streamline my career.”
Movsesyan started as an intern at C-Quadrat Ampega Asset Management Armenia and was promoted to her current role of financial analyst. The firm is one of two pension asset managers in Armenia. Movsesyan credits the MicroMasters program with helping her draw deeper inferences in analytical tasks and empowering her to create more sophisticated dynamic models to support the efficient allocation of assets. Her learning has enabled her to build different valuation models for financial instruments. She is currently developing a portfolio management tool for her company.
“Although the courses are grounded deeply in theory, they never lack a perfect applicability component, which makes them very useful,” says Movsesyan. “Having MIT’s MicroMasters on a CV adds credibility as a professional, and your input becomes more valued by the employer.”
Movsesyan says that the program has helped her to develop resilience, as well as critical and analytical thinking. Her long-term goal is to become a portfolio manager and ultimately establish an asset management company, targeted at offering an extensive range of funds based on diverse risk-return preferences of investors, while promoting transparent and sustainable investment practices.
“The knowledge I’ve gained from the variety of courses is a perfect blend which supports me day-to-day in building solutions to existing problems in asset management,” says Movsesyan.
In addition to being a learner in the program, Movsesyan serves as a community teaching assistant (CTA). After taking 15.516x, she became a CTA for that course, working with learners around the world. She says that this role of helping and supporting others requires constantly immersing herself in the course content, which also results in challenging herself and mastering the material.
“I think my story with the MITx MicroMasters Program is proof that no matter where you are — even if you’re in a small, developing country with limited resources — if you truly want to do something, you can achieve what you want,” says Movsesyan. “It’s an example for students around the world who also have transformative ideas and determination to take action. They can be a part of the MIT community.”
“My story with the MITx MicroMasters Program is proof that no matter where you are — even if you’re in a small, developing country with limited resources — if you truly want to do something, you can achieve what you want,” says Satik Movsesyan, who completed the MITx MicroMasters Program in Finance following her graduation from the American University of Armenia in 2024.
The finding could pave the way for a new type of treatment for glioblastoma, the most aggressive form of brain cancer, although extensive testing will be required before it can be trialled in patients. Glioblastoma is the most common type of brain cancer, with a five-year survival rate of just 5%.
The researchers, from the University of Cambridge, found that cancer cells rely on the flexibility of hyaluronic acid (HA) — a sugar-like polymer that makes up much of the brain’s supporting structure — to latch onto receptors on the surface of cancer cells to trigger their spread throughout the brain.
By locking HA molecules in place so that they lose this flexibility, the researchers were able to ‘reprogramme’ glioblastoma cells so they stopped moving and were unable to invade surrounding tissue. Their results are reported in the journal Royal Society Open Science.
“Fundamentally, hyaluronic acid molecules need to be flexible to bind to cancer cell receptors,” said Professor Melinda Duer from Cambridge’s Yusuf Hamied Department of Chemistry, who led the research. “If you can stop hyaluronic acid being flexible, you can stop cancer cells from spreading. The remarkable thing is that we didn’t have to kill the cells — we simply changed their environment, and they gave up trying to escape and invade neighbouring tissue.”
Glioblastoma, like all brain cancers, is difficult to treat. Even when tumours are surgically removed, cancer cells that have already infiltrated the brain often cause regrowth within months. Current drug treatments struggle to penetrate the tumour mass, and radiotherapy can only delay, not prevent, recurrence of the cancer.
However, the approach developed by the Cambridge team does not target tumour cells directly, but instead attempts to change the tumour’s surrounding environment – the extracellular matrix – to stop its spread.
“Nobody has ever tried to change cancer outcomes by changing the matrix around the tumour,” said Duer. “This is the first example where a matrix-based therapy could be used to reprogramme cancer cells.”
Using nuclear magnetic resonance (NMR) spectroscopy, the team showed that HA molecules twist into shapes that allow them to bind strongly to CD44 — a receptor on cancer cells that drives invasion. When HA was cross-linked and ‘frozen’ into place, those signals were shut down.
The effect was seen even at low concentrations of HA, suggesting the cells were not being physically trapped but instead reprogrammed into a dormant state.
The study may also explain why glioblastoma often returns at the site of surgery. A build-up of fluid, or oedema, at the surgical site dilutes HA, making it more flexible and potentially encouraging cell invasion. By freezing HA in place, it could be possible to prevent recurrence.
“This could be a real opportunity to slow glioblastoma progression,” said Duer. “And because our approach doesn’t require drugs to enter every single cancer cell, it could in principle work for many solid tumours where the surrounding matrix drives invasion.
“Cancer cells behave the way they do in part because of their environment. If you change their environment, you can change the cells.”
The researchers are hoping to conduct further testing in animal models, which could lead to clinical trials in patients.
The research was supported in part by the European Research Council and the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI). Melinda Duer is a Fellow of Robinson College, Cambridge.
Measurements analysed by an international research team led by ETH Zurich show that the global ocean absorbed significantly less CO₂ than anticipated during the unprecedented marine heatwave in 2023.
EPFL, ETH Zurich and the Swiss National Supercomputing Centre (CSCS) released Apertus on 2 September, Switzerland’s first large-scale, open, multilingual language model — a milestone in generative AI for transparency and diversity.
For university students, summer break is more than a time to relax – it is also an opportunity for learning beyond the classroom.
Jazmine Lin, a third-year NUS Political Science undergraduate, participated in the Temasek Foundation-NUS Leadership Enrichment and Regional Networking (TF-NUS LEaRN) Programme 2025 during the recent summer break, discovering that it offered her the best of both worlds: time to recharge combined with enriching learning experiences. She was among the 59 university students from across Southeast Asia who were able to learn more about the region, develop their leadership capabilities collaboratively and form new friendships through the programme.
Immersing in Chiang Mai’s culture
The programme kicked off in May with a two-week immersion in Chiang Mai, Thailand, hosted by the Language Institute of Chiang Mai University (CMU). Thirty students from NUS, Singapore Institute of Technology and Singapore University of Social Sciences had the opportunity to interact with local community leaders and participate in various leadership development workshops. Through field trips, they were able to learn more about the different communities, their unique cultural practices and identities, as well as how the locals tapped into their surrounding resources to make a living.
One such visit took students to Nai Suan (which means ‘In the Garden’ in Thai), a community enterprise in the Mae Rim district that upcycles fallen leaves into biodegradable bowls and plates. This initiative not only promotes sustainable living, but it is also a source of income for the locals who collect the leaves. Students were invited to experience the process themselves – from washing the leaves and removing their veins, to moulding them into the final products by using a hydraulic press.
In the Doi Saket district, students visited the Ban Baiboon Thai-Tai Lue Wisdom Learning Center, which aims to preserve the indigenous Tai Lue culture by offering homestays and tours, as well as workshops where visitors can try their hand at various traditional crafts. Through touring traditional Tai Lue houses, observing the process of handicraft-making and sampling local snacks, students gained first-hand experience of the community’s authentic way of life.
The Chiang Mai leg concluded with a Sustainable Environment Hackathon, where students applied their learnings to develop solutions to address regional environmental challenges. They pitched their ideas to the faculty members from CMU’s Faculty of Science, sparking a lively exchange of innovation and teamwork.
Deepening regional understanding in Singapore
Upon returning to Singapore in July for the next leg of the programme, the students were joined by 29 peers from 19 universities across Southeast Asia.
In the first week, the students focused on community leadership, which emphasised skills such as teamwork and active listening. Students from the Nanyang Technological University and Singapore Management University TF-LEaRN programme also participated in the activities, creating a more dynamic environment.
For the second week, students explored the three key themes of greenery, water and racial harmony, all of which are hallmarks of Singapore’s identity and intrinsically important to national development. By participating in leadership seminars, fireside chats with NUS students and alumni who shared their experiences in community leadership, as well as learning journeys to Marina Barrage, Ba’alwie Mosque and a Veggie Rescue at Little India activity, they gained insights into Singapore’s approach to urban sustainability and multiculturalism. Many overseas students shared how this approach contrasted with those from their home countries, engendering enriching discussions on the region’s shared challenges and diverse approaches.
In the final week, students took part in a futures thinking segment conducted by the Lee Kuan Yew School of Public Policy at NUS, where they were introduced to tools such as horizon scanning and scenario communication, which are useful in anticipating trends and challenges, as well as developing effective strategies.
They were later divided into groups and tasked with identifying and addressing a community development issue in a Southeast Asian country of their choice. Through prototyping, brainstorming and presenting their ideas, students honed their ability to collaborate across cultural and academic disciplinary boundaries.
The entire programme has been an eye-opening experience for Jazmine. “Through the various talks, lectures and learning journeys, I saw how community leadership can come in many different forms. It was interesting to witness how different ideas came to life in both countries and the experience was made richer with the perspectives and insights from our Southeast Asian buddies. But beyond all the learnings, what stayed with me the most were the friendships built, and I believe this will endure well beyond the programme.”
President Alan Garber welcomes the Class of 2029 during Convocation.
Photos by Veasey Conway/Harvard Staff Photographer
Christina Pazzanese
Harvard Staff Writer
Garber urges Class of 2029 to teach, learn from one another, reject viewing world in simple binaries
Alan Garber can still recall arriving on campus as a first-year in 1973.
The University president told the Class of 2029 in his Convocation address Monday afternoon that it quickly becomes apparent to everyone that Harvard is a rare place, one that offers almost limitless opportunities to experiment and to explore whatever intellectual pursuits or interests students have.
But, he counseled the group, they must take care to avoid overlooking the one resource that may very well turn out to be the most valuable and enduring — each other.
“Each of you is here to teach as you learn,” said Garber, an economist, physician, and healthcare policy expert, to the students, faculty, and others gathered at Tercentenary Theatre. “You are here to share your experience and perspective so that our community can be one in which all people are welcomed, all ideas are given due consideration, and all beliefs are treated with respect.”
Held just before classes begin each fall semester, Convocation serves as the University’s official welcome to first-years marking the start of their new lives as undergraduates. Harvard officials addressed students, offered some tried-and-true wisdom about College life, and shared some of the University’s values, history, and traditions.
Garber told the class they share two qualities: All are exceptional students, and all are “capable of making interesting and unusual decisions, not always the ones that others would make.”
That kind of openness and creativity springs from a certain mindset, he said: “You reject ‘either/or.’ You are the kind of ‘both/and’ people that this institution has nurtured, empowered, and celebrated throughout its long history.”
While the first semester of College is exciting and a bit daunting, Garber advised students to resist the urge to seek refuge in the familiar. Instead, he said, they should embrace feeling uncomfortable, pursue unfamiliar people and experiences, and “consider the difficulties and challenges you encounter to be invitations to improve and ultimately to excel.”
Recounting his own undergraduate struggles at Harvard, Garber recalled one classmate who at first seemed brash and intellectually intimidating. But after he set aside his preconceptions about the classmate and took a chance, the two soon became pals, then roommates, and today remain longtime friends.
First-year students attend Convocation.
Harvard’s alma mater “Fair Harvard” fills Tercentenary Theatre.
Garber holds the Class of 2029 banner during the traditional group photo following Convocation.
“Some of these friendships will form easily and require little to no tending. Others will demand effort to take hold. Those are the ones that will evolve in ways you cannot anticipate — that will lead to debate and argument, conflict and reconciliation, growth and change,” Garber said. “Those are the ones worth pursuing intently because they will deepen your understanding and enlarge your spirit.”
The University’s 31st president noted that many of the students “had to surmount a plethora of obstacles to be part of this class. I know some of you worried that you would not be able to make the journey here — would not be able to become part of our community. We are so glad to see you.
“Harvard would not be Harvard if it did not include inquisitive, ambitious students from across the United States and around the world,” he said to widespread applause.
Making his debut as the new Danoff Dean of Harvard College, David Deming, Ph.D. ’10, urged students to view this period of technological disruption, with the rapid growth of AI and its impact on their future job prospects, not with dread but as a huge opportunity to be bold, dream big, and blaze new trails.
An economist who studies education policy, Deming was most recently academic dean at Harvard Kennedy School and served as a faculty dean at Kirkland House before starting in his new role July 1.
Other University officials joined Garber and Deming on stage, including the Rev. Matthew I. Potts, Pusey Minister in Harvard’s Memorial Church, who delivered the invocation; Dean of Undergraduate Education Amanda Claybaugh; Dean of Students Thomas Dunne; and Nekesa Straker, senior assistant dean of resident life and first-year students.
Student musical groups the Harvard University Band, the Kuumba Singers, and the Harvard Choruses all performed.
“Today, we mark much more than just your beginning here,” Garber said at the end of his address. “We mark your belonging here.”
The visit brought together fundamental plant science researchers and crop and Agri-Tech researchers from across the University for a series of research demonstrations and a roundtable discussion.
Mr Zeichner toured the award-winning facility, meeting researchers in the open-plan office and lab spaces, which foster collaboration and advances in multi-disciplinary research.
The Minister saw exciting examples of foundational research, which have the potential to transform agriculture and ensure long-term sustainability.
The first demonstration was led by Dr Sebastian Schornack and PhD student Nicolas Garcia Hernandez, who are investigating plant developmental processes. Through the microscope, the Minister saw how they are using beetroot pigments to make visible how fungi colonise living plant roots. This research allows researchers to track and measure in real time how chemicals, soil tillage and environmental conditions affect this beneficial plant-microbe relationship.
Mr Zeichner then visited the Lab’s microscopy room, where he met Dr Madelaine Bartlett and her colleague Terice Kelly. Dr Bartlett’s team researches the development of maize flowers (among other grass and cereal species), with a particular focus on the genetics behind these specialised flowers and future crop improvement. The team demonstrated how they image a maize flower on the Lab’s desktop scanning electron microscope.
The Sainsbury Laboratory boasts its own Bee Room, where Dr Edwige Moyroud demonstrated how bumble bees are helping to reveal the characteristics of petal patterns that are most important for attracting pollinators. Dr Moyroud and her team are identifying the genes that plants use to produce patterns that attract pollinators by combining various research techniques, including experiments, modelling, microscopy and bee behaviour.
Finally, overlooking Cambridge Botanic Gardens, academics from the Department of Plant Sciences and the Crop Science Centre presented on research into regenerative agriculture and using AI to measure and prevent crop disease.
Professor Lynn Dicks presented the latest findings of the H3 research on regenerative agriculture. During this ongoing five-year project, Professor Dicks and colleagues have worked collaboratively with farming clusters in the UK to study the impacts of a transition to regenerative agriculture, which has so far been shown to improve soil health and reduce the use of chemicals.
Professor Eves-van den Akker and his team, based at the University’s Crop Science Centre, have combined low-cost 3D printing of custom imaging machines with state-of-the-art deep-learning algorithms to make millions of measurements of tens of thousands of parasites across hundreds of genotypes. They are now working with companies to translate this fundamental research, with the aim of accelerating breeding programmes for crop resistance to pests and disease.
The visit concluded with a discussion of the UK’s leading strengths in Agri-Tech and crop science, and how the UK and Cambridge are an attractive place for researchers from around the world to work, and make exciting advances, with global impact.
The University of Cambridge hosted a visit from local MP and Farming Minister Daniel Zeichner at the Sainsbury Laboratory.
The discovery – found in a study in mice – sheds light on the role that inflammation can play in mood disorders and could help in the search for new treatments, in particular for those individuals for whom current treatments are ineffective.
Around 1 billion people will be diagnosed with a mood disorder such as depression or anxiety at some point in their life. While there may be many underlying causes, chronic inflammation – when the body’s immune system stays active for a long time, even when there is no infection or injury to fight – has been linked to depression. This suggests that the immune system may play an important role in the development of mood disorders.
Previous studies have highlighted how high levels of an immune cell known as a neutrophil, a type of white blood cell, are linked to the severity of depression. But how neutrophils contribute to symptoms of depression is currently unclear.
In research published today in Nature Communications, a team led by scientists at the University of Cambridge, UK, and the National Institute of Mental Health, USA, tested a hypothesis that chronic stress can lead to the release of neutrophils from bone marrow in the skull. These cells then collect in the meninges – membranes that cover and protect your brain and spinal cord – and contribute to symptoms of depression.
As it is not possible to test this hypothesis in humans, the team used mice exposed to chronic social stress. In this experiment, an ‘intruder’ mouse is introduced into the home cage of an aggressive resident mouse. The two have brief daily physical interactions and can otherwise see, smell, and hear each other.
The researchers found that prolonged exposure to this stressful environment led to a noticeable increase in levels of neutrophils in the meninges, and that this was linked to signs of depressive behaviour in the mice. Even after the stress ended, the neutrophils persisted longer in the meninges than in the blood. Analysis confirmed the researchers’ hypothesis that the meningeal neutrophils – which appeared subtly different from those found in the blood – originated in the skull.
Further analysis suggested that long-term stress triggered a type of immune system ‘alarm warning’ known as type I interferon signalling in the neutrophils. Blocking this pathway – in effect, switching off the alarm – reduced the number of neutrophils in the meninges and improved behaviour in the depressed mice. This pathway has previously been linked to depression – type I interferons are used to treat patients with hepatitis C, for example, but a known side effect of the medication is that it can cause severe depression during treatment.
Dr Stacey Kigar from the Department of Medicine at the University of Cambridge said: “Our work helps explain how chronic stress can lead to lasting changes in the brain’s immune environment, potentially contributing to depression. It also opens the door to possible new treatments that target the immune system rather than just brain chemistry.
“There’s a significant proportion of people for whom antidepressants don’t work, possibly as many as one in three patients. If we can figure out what's happening with the immune system, we may be able to alleviate or reduce depressive symptoms.”
The reason why there are high levels of neutrophils in the meninges is unclear. One explanation could be that they are recruited by microglia, a type of immune cell unique to the brain. Another possible explanation is that chronic stress may cause microhaemorrhages, tiny leaks in brain blood vessels, and that neutrophils – the body’s ‘first responders’ – arrive to repair the damage and prevent it from worsening. These neutrophils then become more rigid, possibly getting stuck in brain capillaries and causing further inflammation in the brain.
Dr Mary-Ellen Lynall from the Department of Psychiatry at the University of Cambridge said: “We’ve long known that something is different about how neutrophils behave after stressful events, or during depression, but we didn’t know what these neutrophils were doing, where they were going, or how they might be affecting the brain and mind. Our findings show that these ‘first responder’ immune cells leave the skull bone marrow and travel to the brain, where they can influence mood and behaviour.
“Most people will have experienced how our immune systems can drive short-lived depression-like symptoms. When we are sick, for example with a cold or flu, we often lack energy and appetite, sleep more and withdraw from social contact. If the immune system is always in a heightened, pro-inflammatory state, it shouldn’t be too surprising if we experience longer-term problems with our mood.”
The findings could provide a useful signature, or ‘biomarker’, to help identify those patients whose mood disorders are related to inflammation. This could help in the search for better treatments. For example, a clinical trial of a potential new drug that targets inflammation of the brain in depression might appear to fail if trialled on a general cohort of people with depression, whereas using the biomarker to identify individuals whose depression is linked to inflammation could increase the likelihood of the trial succeeding.
The findings may also help explain why depression is a common symptom of other neurological disorders such as stroke and Alzheimer’s disease, as it may be the case that neutrophils are released in response to the damage to the brain seen in these conditions. But they may also explain why depression is itself a risk factor for dementia in later life, if neutrophils can themselves trigger damage to brain cells.
The research was funded by the National Institute of Mental Health, Medical Research Council and National Institute for Health and Care Research Cambridge Biomedical Research Centre.
Immune cells released from bone marrow in the skull in response to chronic stress and adversity could play a key role in symptoms of depression and anxiety, say researchers.
By Dr Chew Han Ei, Senior Research Fellow and Head of Governance and Economy Cluster at the Institute of Policy Studies, Lee Kuan Yew School of Public Policy at NUS
After 25 years at the NASA Jet Propulsion Laboratory, Richard Kornfeld is returning to his alma mater. Starting in September, he will take over the operational management of ETH Zurich Space, bringing extensive experience of space missions to his new role.
‘Future of Jobs’ report highlights value of emotional intelligence
A recent report on “The Future of Jobs” by the World Economic Forum found that while analytical thinking is still the most coveted skill among employers, several emotional intelligence skills — among them motivation, self-awareness, empathy, and active listening — rank in the top 10 of a list of 26 core competencies.
In this edited conversation, Ron Siegel, assistant professor of psychology at Harvard Medical School, explains why emotional intelligence skills are crucial in the workplace, especially in the age of AI.
What’s emotional intelligence? Is it a different way of being smart?
It is a kind of being smart, but it’s not what we usually think of as being smart. In recent decades, psychologists who study intelligence have become aware that there are many different kinds of intelligence. You could think of somebody who has natural athletic ability as having a kind of body or coordination intelligence or somebody who has a natural math ability as having a good deal of mathematical intelligence, and so on.
When we look over human experience in the developed world, where many people have basic food, clothing, and shelter, there’s nonetheless a great deal of conflict and unhappiness. Most of this strife involves the challenges of working with our emotions as humans, and particularly the complexity of our reactions in relationships. Emotional intelligence is a particular skill of recognizing one’s own feelings, working with those feelings, and not just reacting in ways that are going to be problematic. It also involves recognizing the feelings that are arising in others, and then being able to work with others, to work out conflicts, or get along well with one another.
Why do employers consider emotional intelligence one of the top core skills needed to thrive in the workplace?
The importance of emotional competence comes from the observation in the business world, in academia, the military, and every human enterprise, that there are people who are highly competent in technical and analytical skills, but when they interact with others, projects stall. So many resources are wasted in emotional misunderstandings or in people’s difficulty with emotional regulation. We humans are grossly inefficient in trying to get things done because most of our energy is spent on trying to make sure we look good, or on making sure that people think of us in a certain way, or on getting triggered by one another. I suspect that business leaders have realized that it’s relatively easy to get technical expertise in almost anything, but to get people who can understand and get along with one another, that is a challenge. In many projects, there is a growing awareness that this skill is going to be the one that carries the day.
Can you talk about the evolution of the concept of emotional intelligence since publication of the 1995 book “Emotional Intelligence” by Daniel Goleman, Ph.D. ’74?
Humans have known about this for a long time. Western industrialized cultures have very much favored other forms of intelligence, like logical analytical ability, mathematical ability, and entrepreneurial skills over relational skills and the ability to connect with feelings and connect with one another. Over the years, psychologists have become more aware of a strong cultural bias toward certain kinds of intelligence and against other kinds of intelligence, and they have tried to rectify that by looking at emotional intelligence. And when Daniel Goleman wrote his landmark book, people started realizing that there are many people who may have high SAT and GRE scores but are not thriving in life or even succeeding in their work. And when we look at why that is, it turns out that they don’t know how to manage their own emotions or how to read other people’s emotions, and they don’t know how to get along effectively with other people, while other people with far lower GRE and SAT scores have skills to understand and read people and can get a team together and lead them to accomplish things and have great success. There’s a growing realization that emotional intelligence matters, even for external material, goal-oriented activities.
Are emotional intelligence skills relevant in the age of AI?
As people increasingly are interacting with chatbots rather than real human beings to get their work done, I suspect that authentic, connected human interactions are going to become more important. Humans are hardwired to be a social species — we long for connection to others. We hate the experience of being ostracized and pushed out of the group. That’s in our basic primate nature, and I suspect that as more of people’s lives are engaged in interactions with AI, even though it does a nice job of imitating human responses, that people will long for simple, natural responses. That’s my hope, anyway, that people will value genuine connection rather than preferring to spend time with chatbots because “My chatbot is so much more complimentary toward me than my spouse or is so much more willing to change its mind to accommodate my needs.” I’m hoping we don’t just go for the chatbots because they’re better at boosting our egos.
“As people increasingly are interacting with chatbots rather than real human beings to get their work done, I suspect that authentic, connected human interactions are going to become more important.”
What are the components of emotional intelligence? How can we become emotionally competent?
The first component is self-awareness, which means being conscious of our own thoughts, feelings, and what’s happening inside of us. It is the capacity to notice that every simple interaction stimulates myriad different emotions and associations to all the other moments in our life. The second big area is self-regulation, which is the ability to manage our emotions in a healthy way. It means that we’re able to feel the full range of our emotions and yet not be overwhelmed by them. The third big component is social awareness or empathy, and that’s noticing what’s going on in others. This means being free enough of self-preoccupation so that we can see that other people have needs, desires, fears, and hurts, and so we can respond to them in appropriate ways. And the fourth big component is social skills, which is the ability to work well in teams, to be able to solve conflicts and help the team to cooperate.
Emotional competence is key in our personal lives too. I’m a clinical psychologist by training and I know that most people are not struggling because they can’t figure out the answer to a technical question. They are struggling because they can’t figure out how to get along with their kids, their parents, their spouses, their siblings, their neighbors, or their friends. How do we stop hurting each other’s feelings and find a way to feel safely connected and love one another? That’s our big challenge.
Curators and conservators at the Harvard Art Museums zoom in on the tiny details that tell big stories about some of their favorite works
Looking at art can be intimidating for the untrained. Is this piece impressionist or surrealist? What, exactly, makes it worthy of hanging in a museum?
“Ultimately, it’s subjective,” Lynette Roth, the Daimler Curator of the Busch-Reisinger Museum, told the Gazette in 2023. “I can’t convince you to like something because I say, ‘This is a major artist of the 20th century’ — you might not be interested in that. But my experience has been that it will grow on you as you have more context.”
We asked specialists from the Harvard Art Museums to lend us their expertise to help develop that context. Below, they home in on the tiny details that make pieces of art important.
Sparrows get new perch
“Wall Painting Fragment from the Villa at Boscotrecase,” 10 B.C.E.-1 B.C.E.
Kate Smith with “Wall Painting Fragment from the Villa at Boscotrecase.”
These sparrows were painted high up on the wall of a villa near Naples, Italy, about 2,000 years ago. Though they have suffered some paint loss, they are still recognizable and so lifelike; standing in a puddle of water, one is drinking and splashing. The original wall was part of a grand villa made for the emperor’s grandson; the whole structure was buried by a volcanic eruption in 79 C.E. When the villa was discovered and excavated in the early 20th century, the recovered fragments went to various museums and this single piece came to Harvard, where it lived in storage for almost a century. When the curators decided to display this piece of decorated wall in our Roman galleries in 2014, I reattached flaking paint and removed accumulated grime from the surface, revealing the bright colors and the glossy, polished red and yellow surfaces.
The birds would not have been very visible up near the ceiling; they were minor decorative elements. Now that this piece of wall lives in the museum at eye level, visitors can have a close look. I love how the coarsely ground mineral pigments used to paint them glitter in the light, and how jumpy and flighty and alert the birds seem.
— Kate Smith, Senior Conservator of Paintings, Head of Paintings Lab
Retracing the creative process
“Leaping Antelopes,” c. 1745
Penley Knipe with “Leaping Antelopes.”
This small drawing from the Kota tradition of painting in India measures just 3½ by 7 inches. It has energetic antelopes leaping across it. As a paper conservator, I am tasked with the physical care of the various types of works on paper. What I love most is any and all evidence of the materials the artist may have used.
This drawing also bears equally elegant swirls of ink, where the artist tested out various ink colors and dilutions. You can see many grays, but there is also a bright orange squiggle and a chartreuse one, colors that don’t make an appearance as an antelope. One gets the impression that the paper is not only a mid-18th-century sheet of sketches where the artist works and reworks the prancing antelopes, but also a scratch pad. These details put us that much closer to the artist. Speaking of tiny details, don’t miss the small head at lower left as well. I find these small tidbits both delightful and informative.
— Penley Knipe, Philip and Lynn Straus Senior Conservator of Works of Art on Paper and Head of Paper Lab
Try to look away
“Child from the Old Town,” Ernst Thoms, 1925
Lynette Roth with “Child from the Old Town.”
Currently on view at the museums is a small painting with a monumental impact. In it, a child’s melancholic gaze is highlighted by the strong play of light and shadow on her forehead and around her mouth. The unnamed sitter is described in the work’s title only as an inhabitant of a city center, which we see behind her, sketched thinly in oil paint.
In a period of economic and political instability in Germany after World War I, such areas were often plagued by housing instability and a lack of fresh green spaces for working-class families. By lending such dramatic contour to the young girl’s face — as if a spotlight were shining directly at her — Ernst Thoms makes her palpable and challenges us to consider the material circumstances of workers’ lives.
— Lynette Roth, Daimler Curator of the Busch-Reisinger Museum
Echoes of love verse
“Portrait of Maharaja Kumar Sawant Singh of Kishangarh,” 1745
Janet O’Brien with “Portrait of Maharaja Kumar Sawant Singh of Kishangarh.”
“Inhabit the garden of love, sing of the garden of love. Nagar says: enter the beloved’s dwelling in the garden of love.”
These are the words of Maharaja Sawant Singh, an 18th-century ruler of Kishangarh, Rajasthan, and a poet under the pen name Nagari Das.
In this portrait, the poet-king stands amidst pink roses in full bloom. Gazing down from the window above is his beloved. But my favorite tiny detail — and the most tender and touching one — is the female attendant holding the door ajar. With just the tip of her bejeweled nose and the edge of her red skirt visible, she reaches forward with a sprig of roses, inviting Sawant Singh to “enter the beloved’s dwelling in the garden of love.”
But these words do not accompany the painting. Rather, they are from one of his poems called the “Garden of Love” (or “ʿIshq Chaman”). Dedicated to the divine passion of Krishna for Radha, the poem is an expression of Sawant Singh’s ardent love for Bani Thani, a poet and singer, who is most likely the woman seated at the window.
— Janet O’Brien, Calderwood Curatorial Fellow in South Asian and Islamic Art
Can you spot the tiny animal?
“Garden Carpet,” 18th century
A tiny animal is hidden in the 18th-century wool carpet.
The Islamic Art gallery currently displays a monumental Persian carpet. Dating back to the 18th century, this wool carpet is adorned with a design inspired by gardens. Although many Persian rugs reference gardens through botanical ornament, this example presents a formal garden layout known as the chahar bagh (four-part garden). Such gardens, planted with fruit trees and separated by axial water channels, were an important part of the palatial and urban complexes of the Islamic era in Iran, Central Asia, and later in India. On this carpet, a wide stream of water, intersected by narrower channels, runs through flowerbeds. Amongst this rich design, a tiny animal, possibly a goat, is asymmetrically placed in one of the flowerbeds. Easily missed by the unaware eye, the little goat appears to be a token left by the weavers of this carpet. Although we do not know the artisans who produced this carpet following an earlier established design, the tiny animal is a reminder of their existence and the liberties they took to insert their identity, only to be revealed to the keen eye.
This silver coin minted in ancient Syracuse is truly remarkable. It is a superb example of miniature engraving. Although it is one of the largest ancient Greek denominations ever minted — worth 10 drachmai — it is only a third bigger in diameter than a U.S. dollar. Yet, the engraving is incredibly detailed: a four-horse chariot on the obverse — unfortunately not well-preserved on this specimen — and the head of the nymph Arethusa complete with a hairnet and jewelry on the reverse. Even more special is the fact that the engraver of the die — the punch used to strike the coin — signed his work! The letter K on the headband just above the forehead is his initial, and his full name is inscribed on the dolphin below her neck: KIMON. This is extremely rare. We only know of a few ancient die engravers by name and Kimon is the most famous and accomplished. There is something so moving about being able to refer to the artist by name, although we know almost nothing else about him. It is a link with this person who lived somewhere around Sicily over 2,400 years ago.
— Laure Marest, Damarete Associate Curator of Ancient Coins
An instant classic
“Marsha,” Dawoud Bey, 1998
Dawoud Bey’s diptych is a large-format type of Polaroid.
Depending on the generation you were born into, you might recall the 2003 Outkast hit “Hey Ya!” with its chorus “shake it like a Polaroid picture.” Believe it or not, this diptych is also a Polaroid picture, but considerably larger. Each of these “instant” photographs is closer to 20 by 24 inches in size and was literally pulled from an even larger traditional view camera, only a handful of which were ever made and distributed across the globe. The emblematic “squash” of chemistry along each side is an artifact of the sophisticated dye diffusion process.
A light-sensitive sheet is exposed inside of the camera. That same sheet is then squeegeed against a second sheet (coated with dye-receiving material) through reagent pods and motor-driven spreader rolls as the sandwich is pulled out of the camera. After roughly 1½ minutes pass, the two sheets are masterfully peeled apart and the second sheet exhibits the recorded image in color. Like magic! Today, the Harvard Collection of Historical Scientific Instruments has one of the original 20×24 cameras.
“Large Eight-Lobed Mirror with Relief Decoration,” eighth century
Susan Costello shares an eighth-century Chinese bronze mirror.
While examining an eighth-century Chinese bronze mirror under the microscope, I discovered impressions of a long-lost textile hidden among the layers of red, green, and blue corrosion. These pseudomorphs formed over centuries during burial, as the organic fibers decayed and were replaced by copper corrosion, perfectly preserving the fabric down to individual fibers. They offer a rare glimpse into ancient textiles that would otherwise be lost to time.
Besides being fascinating, these textile pseudomorphs help recover part of the mirror’s lost narrative. We have no archaeological context to tell us where the mirror was found, who owned it, or how it was placed in the grave, but the impressions left behind speak volumes. Found on both the front and back of the mirror, the pseudomorphs suggest the object was once carefully wrapped in cloth. This was an object owned by a living person who valued it in both life and death.
Finding this unexpected human connection to the past moved me, and the fact that it was not the original textile that survived, but traces of it, preserved through a chemical transformation, makes it all the more compelling.
Narayan Khandekar appreciates the marks of Pollock’s process.
This painting is in close to untouched condition, with minimal conservation work, as if it had just left Betty Parsons Gallery. In this detail we can see the painting is stapled from the front to hold it onto a wooden stretcher. The canvas has drips and splashes of paint, and a single blue thread marks the selvedge. There is another selvedge on the opposite side, telling us this was the full width of the canvas roll. Knowing this, we can work out the steps of the painting’s creation. Pollock unrolled the canvas on the floor and splashed paint onto the surface in his characteristic method. When he was finished, he cut the painting from the roll and, not wanting to lose any of the image, stapled the canvas onto the stretcher from the front, sometimes through the paint. Almost all artists fold the canvas over the edge of the stretcher and attach it from the sides or back where it is out of sight, but that was not important to Pollock. This cluster of clues tells us so much from so little; it takes us from where we stand in the gallery back in time to watching Pollock at work in his studio.
— Narayan Khandekar, Director of the Straus Center for Conservation and Technical Studies and Senior Conservation Scientist
Art meets mechanics
“Light Prop for an Electric Stage,” László Moholy-Nagy, 1930
Peter Murphy stands with “Light Prop for an Electric Stage.”
This icon of the Busch-Reisinger Museum is the pinnacle of László Moholy-Nagy’s experiments at the Bauhaus. Throughout his tenure as faculty at the influential school of art and design, Moholy-Nagy envisioned how to bring his sculpture to life. It was only in 1930, two years after leaving the Bauhaus, that he was able to realize his vision with the help of the German electronics company AEG, an engineer named Stefan Sebok, and a mechanic named Otto Ball. Through this collaboration, the Light Prop was able to come to life and move. Since then, the sculpture has struggled with malfunctions and damage, leading to many of its original parts being replaced. Except for the motor from Boston Gear, it’s nearly impossible to determine whether a part is a replica. One easily overlooked detail, however, is original to the sculpture: a metal plaque on its platform that features Otto Ball’s name and logo. For me, this subtle trace is crucial not only for understanding the Light Prop’s history, but for recognizing that this groundbreaking sculpture has involved many hands across its many lives.
— Peter Murphy, Stefan Engelhorn Curatorial Fellow in the Busch-Reisinger Museum
“Research and education require not only that we speak, but that we listen to and learn from one another,” the president told students at his annual Orientation talk on free expression.
After ten years, the Future Resilient Systems programme at the Singapore-ETH Centre (SEC) is drawing to a close. In our interview, Programme Director Jonas Jörin talks about the programme's successes and the future of resilience research.
Cancer cells provide healthy neighbouring cells with additional cell powerhouses to put them to work. This has been demonstrated by researchers at ETH Zurich in a new study. In this way, cancer is exploiting a mechanism that frequently serves to repair damaged cells.
How likely you think something is to happen depends on what you already believe about the circumstances. That is the simple concept behind Bayes’ rule, an approach to calculating probabilities, first proposed in 1763. Now, an international team of researchers has shown how Bayes’ rule operates in the quantum world.
“I would say it is a breakthrough in mathematical physics,” said Professor Valerio Scarani, Deputy Director and Principal Investigator at the Centre for Quantum Technologies, and member of the team. His co-authors on the work published on 28 August 2025 in Physical Review Letters are Assistant Professor Ge Bai at the Hong Kong University of Science and Technology in China, and Professor Francesco Buscemi at Nagoya University in Japan.
“Bayes’ rule has been helping us make smarter guesses for 250 years. Now we have taught it some quantum tricks,” said Prof Buscemi.
While other researchers had previously proposed quantum analogues of Bayes’ rule, this team is the first to derive a quantum Bayes’ rule from a fundamental principle.
Conditional probability
Bayes’ rule is named for Thomas Bayes, who first defined his rules for conditional probabilities in ‘An Essay Towards Solving a Problem in the Doctrine of Chances’.
Consider a case in which a person tests positive for flu. They may have suspected they were sick, but this new information would change how they think about their health. Bayes’ rule provides a method to calculate the probability of flu conditioned not only on the test result and the chances of the test giving a wrong answer, but also on the individual’s initial beliefs.
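The arithmetic behind that update is compact. Here is a minimal sketch in Python; the prior, sensitivity, and specificity values are made-up numbers for illustration:

```python
def bayes_update(prior, sensitivity, specificity, test_positive=True):
    """Posterior probability of flu given a test result, via Bayes' rule.

    prior        -- initial belief P(flu)
    sensitivity  -- P(test positive | flu)
    specificity  -- P(test negative | no flu)
    """
    if test_positive:
        likelihood = sensitivity
        evidence = sensitivity * prior + (1 - specificity) * (1 - prior)
    else:
        likelihood = 1 - sensitivity
        evidence = (1 - sensitivity) * prior + specificity * (1 - prior)
    return likelihood * prior / evidence

# Someone who suspects flu (prior 30%) takes a test with 90% sensitivity
# and 95% specificity:
print(round(bayes_update(0.30, 0.90, 0.95, test_positive=True), 3))   # 0.885
print(round(bayes_update(0.30, 0.90, 0.95, test_positive=False), 3))  # 0.043
```

Note that a negative result does not drive the probability to zero; it only lowers it, in line with the person's initial belief.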
Bayes’ rule interprets probabilities as expressing degrees of belief in an event. This has been long debated, since some statisticians think that probabilities should be “objective” and not based on beliefs. However, in situations where beliefs are involved, Bayes’ rule is accepted as a guide for reasoning. This is why it has found widespread use from medical diagnosis and weather prediction to data science and machine learning.
Principle of minimum change
When calculating probabilities with Bayes’ rule, the principle of minimum change is obeyed. Mathematically, the principle of minimum change minimises the distance between the joint probability distributions of the initial and updated belief. Intuitively, this is the idea that for any new piece of information, beliefs are updated in the smallest possible way that is compatible with the new facts. In the case of the flu test, for example, a negative test would not imply that the person is healthy, but rather that they are less likely to have the flu.
In their work, Prof Scarani, who is also from NUS Department of Physics, Asst Prof Bai, and Prof Buscemi began with a quantum analogue to the minimum change principle. They quantified change in terms of quantum fidelity, which is a measure of the closeness between quantum states.
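Quantum fidelity itself is straightforward to compute. As an illustration (not the authors' code), the standard Uhlmann fidelity between two density matrices can be evaluated directly; the two qubit states below are arbitrary examples:

```python
import numpy as np
from scipy.linalg import sqrtm

def fidelity(rho, sigma):
    """Uhlmann fidelity F(rho, sigma) = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2.
    Equals 1 for identical states and 0 for perfectly distinguishable ones."""
    root = sqrtm(rho)
    return float(np.real(np.trace(sqrtm(root @ sigma @ root))) ** 2)

# Two single-qubit pure states, |0> and |+>:
ket0 = np.array([[1.0], [0.0]])
ket_plus = np.array([[1.0], [1.0]]) / np.sqrt(2)
rho0 = ket0 @ ket0.T
rho_plus = ket_plus @ ket_plus.T

print(fidelity(rho0, rho0))      # identical states: fidelity is 1
print(fidelity(rho0, rho_plus))  # overlap |<0|+>|^2, i.e. 0.5
```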
Researchers had long expected that a quantum Bayes’ rule should exist, because quantum states define probabilities. For example, the quantum state of a particle provides the probability of it being found at different locations. The goal is to determine the whole quantum state, but the particle is only found at one location when a measurement is performed. This new information will then update the belief, boosting the probability around that location.
The team derived their quantum Bayes’ rule by maximising the fidelity between two objects that represent the forward and the reverse process, in analogy with a classical joint probability distribution. Maximising fidelity is equivalent to minimising change. They found in some cases their equations matched the Petz recovery map, which was proposed by Dénes Petz in the 1980s and was later identified as one of the most likely candidates for the quantum Bayes’ rule based just on its properties.
“This is the first time we have derived it from a higher principle, which could be a validation for using the Petz map,” said Prof Scarani. The Petz map has potential applications in quantum computing for tasks such as quantum error correction and machine learning. The team plans to explore whether applying the minimum change principle to other quantum measures might reveal other solutions.
In an unhappy coincidence, the Covid-19 pandemic and Angie Jo’s doctoral studies in political science both began in 2019. Paradoxically, this global catastrophe helped define her primary research thrust.
As countries reacted with unprecedented fiscal measures to protect their citizens from economic collapse, Jo MCP ’19 discerned striking patterns among these interventions: Nations typically seen as the least generous on social welfare were suddenly deploying the most dramatic emergency responses.
“I wanted to understand why countries like the U.S., which famously offer minimal state support, suddenly mobilize an enormous emergency response to a crisis — only to let it vanish after the crisis passes,” says Jo.
Driven by this interest, Jo launched into a comparative exploration of welfare states that forms the backbone of her doctoral research. Her work examines how different types of welfare regimes respond to collective crises, and whether these responses lead to lasting institutional reforms or merely temporary patches.
A mismatch in investments
Jo’s research focuses on a particular subset of advanced industrialized democracies — countries like the United States, United Kingdom, Canada, and Australia — that political economists classify as “liberal welfare regimes.” These nations stand in contrast to the “social democratic welfare regimes” exemplified by Scandinavian countries.
“In everyday times, citizens in countries like Denmark or Sweden are already well-protected by a deep and comprehensive welfare state,” Jo explains. “When something like Covid hits, these countries were largely able to use the social policy tools and administrative infrastructure they already had, such as subsidized childcare and short-time work schemes that prevent mass layoffs.”
Liberal welfare regimes, however, exhibit a different pattern. During normal periods, "government assistance is viewed by many as the last resort,” Jo observes. “It’s means-tested and minimal, and the responsibility to manage risk is put on the individual.”
Yet when Covid struck, these same governments “spent historically unprecedented amounts on emergency aid to citizens, including stimulus checks, expanded unemployment insurance, child tax credits, grants, and debt forbearance that might normally have faced backlash from many Americans as government ‘handouts.’”
This stark contrast — minimal investment in social safety nets during normal times followed by massive crisis spending — lies at the heart of Jo’s inquiry. “What struck me was the mismatch: The U.S. invests so little in social welfare at baseline, but when crisis hits, it can suddenly unleash massive aid — just not in ways that stick. So what happens when the next crisis comes?”
From architecture to political economy
Jo took a winding path to studying welfare states in crisis. Born in South Korea, she moved with her family to California at age 3 as her parents sought an American education for their children. After moving back to Korea for high school, she attended Harvard University, where she initially focused on art and architecture.
“I thought I’d be an artist,” Jo recalls, “but I always had many interests, and I was very aware of different countries and different political systems, because we were moving around a lot.”
While studying architecture at Harvard, Jo’s academic focus pivoted.
“I realized that most of the decisions around how things get built, whether it’s a building or a city or infrastructure, are made by the government or by powerful private actors,” she explains. “The architect is the artist’s hand that is commissioned to execute, but the decisions behind it, I realized, were what interested me more.”
After a year working in macroeconomics research at a hedge fund, Jo found herself drawn to questions in political economy. “While I didn’t find the zero-sum game of finance compelling, I really wanted to understand the interactions between markets and governments that lay behind the trades,” she says.
Jo decided to pursue a master’s degree in city planning at MIT, where she studied the political economy of master-planning new cities as a form of industrial policy in China and South Korea, before transitioning to the political science PhD program. Her research focus shifted dramatically when the Covid-19 pandemic struck.
“It was the first time I realized, wow, these wealthy Western democracies have serious problems, too,” Jo says. “They are not dealing well with this pandemic and the structural inequalities and the deep tensions that have always been part of some of these societies, but are being tested even further by the enormity of this shock.”
The costs of crisis response
One of Jo’s key insights challenges conventional wisdom about fiscal conservatism. The assumption that keeping government small saves money in the long run may be fundamentally flawed when considering crisis response.
“What I’m exploring in my research is the irony that the less you invest in a capable, effective and well-resourced government, the more that backfires when a crisis inevitably hits and you have to patch up the holes,” Jo argues. “You’re not saving money; you’re deferring the cost.”
This inefficiency becomes particularly apparent when examining how different countries deployed aid during Covid. Countries like Denmark, with robust data systems connecting health records, employment information, and family data, could target assistance with precision. The United States, by contrast, relied on blunter instruments.
“If your system isn’t built to deliver aid in normal times, it won’t suddenly work well under pressure,” Jo explains. “The U.S. had to invent entire programs from scratch overnight — and many were clumsy, inefficient, or regressive.”
There is also a political aspect to this constraint. “Not only do liberal welfare countries lack the infrastructure to address crises, they are often governed by powerful constituencies that do not want to build it — they deliberately choose to enact temporary benefits that are precisely designed to fade,” Jo argues. “This perpetuates a cycle where short-term compensations are employed from crisis to crisis, constraining the permanent expansion of the welfare state.”
Missed opportunities
Jo’s dissertation also examines whether crises provide opportunities for institutional reform. Her second paper focuses on the 2008 financial crisis in the United States, and the Hardest Hit Fund, a program that allocated federal money to state housing finance agencies to prevent foreclosures.
“I ask why, with hundreds of millions in federal aid and few strings attached, state agencies ultimately helped so few underwater homeowners shed unmanageable debt burdens,” Jo says. “The money and the mandate were there — the transformative capacity wasn’t.”
Some states used the funds to pursue ambitious policy interventions, such as restructuring mortgage debt to permanently reduce homeowners’ principal and interest burdens. However, most opted for temporary solutions like helping borrowers make up missed payments, while preserving their original contract. Partisan politics, financial interests, and status quo bias are most likely responsible for these varying state strategies, Jo believes.
She sees this as “another case of the choice that governments have between throwing money at the problem as a temporary Band-Aid solution, or using a crisis as an opportunity to pursue more ambitious, deeper reforms that help people more sustainably in the long run.”
The significance of crisis response research
For Jo, understanding how welfare states respond to crises is not just an academic exercise, but a matter of profound human consequence.
“When there’s an event like the financial crisis or Covid, the scale of suffering and the welfare gap that emerges is devastating,” Jo emphasizes. “I believe political science should be actively studying these rare episodes, rather than disregarding them as once-in-a-century anomalies.”
Her research carries implications for how we think about welfare state design and crisis preparedness. As Jo notes, the most vulnerable members of society — “people who are unbanked, undocumented, people who have low or no tax liability because they don’t make enough, immigrants or those who don’t speak English or don’t have access to the internet or are unhoused” — are often invisible to relief systems.
As Jo prepares for her career in academia, she is motivated to apply her political science training to address such failures. “We’re going to have more crises, whether pandemics, AI, climate disasters, or financial shocks,” Jo warns. “Finding better ways to cover those people is essential, and is not something that our current welfare state — or our politics — are designed to handle.”
“I wanted to understand why countries like the U.S., which famously offer minimal state support, suddenly mobilize an enormous emergency response to a crisis — only to let it vanish after the crisis passes,” says PhD candidate Angie Jo.
Every year, global health experts are faced with a high-stakes decision: Which influenza strains should go into the next seasonal vaccine? The choice must be made months in advance, long before flu season even begins, and it can often feel like a race against the clock. If the selected strains match those that circulate, the vaccine will likely be highly effective. But if the prediction is off, protection can drop significantly, leading to (potentially preventable) illness and strain on health care systems.
This challenge became even more familiar to scientists during the years of the Covid-19 pandemic. Think back to the times, again and again, when new variants emerged just as vaccines were being rolled out. Influenza behaves like a similarly rowdy cousin, mutating constantly and unpredictably. That makes it hard to stay ahead, and therefore harder to design vaccines that remain protective.
To reduce this uncertainty, scientists at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the MIT Abdul Latif Jameel Clinic for Machine Learning in Health set out to make vaccine selection more accurate and less reliant on guesswork. They created an AI system called VaxSeer, designed to predict dominant flu strains and identify the most protective vaccine candidates, months ahead of time. The tool uses deep learning models trained on decades of viral sequences and lab test results to simulate how the flu virus might evolve and how the vaccines will respond.
Traditional evolution models often analyze the effect of single amino acid mutations independently. “VaxSeer adopts a large protein language model to learn the relationship between dominance and the combinatorial effects of mutations,” explains Wenxian Shi, a PhD student in MIT’s Department of Electrical Engineering and Computer Science, researcher at CSAIL, and lead author of a new paper on the work. “Unlike existing protein language models that assume a static distribution of viral variants, we model dynamic dominance shifts, making it better suited for rapidly evolving viruses like influenza.”
VaxSeer has two core prediction engines: one that estimates how likely each viral strain is to spread (dominance), and another that estimates how effectively a vaccine will neutralize that strain (antigenicity). Together, they produce a predicted coverage score: a forward-looking measure of how well a given vaccine is likely to perform against future viruses.
The score ranges from negative infinity to 0: the closer it is to 0, the better the antigenic match between the vaccine strains and the circulating viruses. (You can think of it as the negative of a kind of "distance.")
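To make the shape of such a score concrete, here is an illustrative sketch, not VaxSeer's actual formula: a dominance-weighted log of antigenic match is always at most 0 and approaches 0 as the vaccine matches the strains predicted to circulate. The strain names and match values are hypothetical.

```python
import math

def coverage_score(dominance, match):
    """Dominance-weighted log-match, an always-nonpositive 'negative distance'.

    dominance: strain -> predicted frequency (sums to 1).
    match: strain -> antigenic match in (0, 1], where 1 is a perfect match.
    """
    return sum(p * math.log(match[s]) for s, p in dominance.items())

# Hypothetical predicted dominance for three strains next season.
dominance = {"A": 0.6, "B": 0.3, "C": 0.1}

# A vaccine that matches the dominant strains scores closer to 0
# than one that only matches a rare strain.
good = {"A": 0.9, "B": 0.8, "C": 0.5}
poor = {"A": 0.3, "B": 0.4, "C": 0.9}

assert coverage_score(dominance, good) > coverage_score(dominance, poor)
```

A perfect match for every strain (all match values 1.0) yields exactly 0, which is why the score reads naturally as the negative of a distance.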
In a 10-year retrospective study, the researchers evaluated VaxSeer’s recommendations against those made by the World Health Organization (WHO) for two major flu subtypes: A/H3N2 and A/H1N1. For A/H3N2, VaxSeer’s choices outperformed the WHO’s in nine out of 10 seasons, based on retrospective empirical coverage scores (a surrogate metric for vaccine effectiveness, calculated from the observed dominance in past seasons and experimental HI test results). The team used this metric to evaluate vaccine selections, as true effectiveness is only available for vaccines actually given to the population.
For A/H1N1, it outperformed or matched the WHO in six out of 10 seasons. In one notable case, for the 2016 flu season, VaxSeer identified a strain that wasn’t chosen by the WHO until the following year. The model’s predictions also showed strong correlation with real-world vaccine effectiveness estimates, as reported by the CDC, Canada’s Sentinel Practitioner Surveillance Network, and Europe’s I-MOVE program. VaxSeer’s predicted coverage scores aligned closely with public health data on flu-related illnesses and medical visits prevented by vaccination.
So how exactly does VaxSeer make sense of all these data? Intuitively, the model first estimates how rapidly a viral strain spreads over time using a protein language model, and then determines its dominance by accounting for competition among different strains.
Once the model has calculated these quantities, they’re plugged into a mathematical framework based on ordinary differential equations to simulate viral spread over time. For antigenicity, the system estimates how well a given vaccine strain will perform in a common lab test called the hemagglutination inhibition (HI) assay, which measures how effectively antibodies can block the virus from binding to human red blood cells and is a widely used proxy for antigenic match.
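The competition step above can be sketched with a minimal replicator-style ordinary differential equation, in which each strain's share of circulation grows or shrinks according to how its spread rate compares to the population average. This is an illustration of the general technique, not VaxSeer's implementation; the fitness values are hypothetical, whereas VaxSeer derives its rates from a protein language model.

```python
def simulate_dominance(fitness, x0, dt=0.01, steps=1000):
    """Euler-integrate dx_i/dt = x_i * (f_i - mean fitness) and return final shares."""
    x = list(x0)
    for _ in range(steps):
        avg = sum(xi * fi for xi, fi in zip(x, fitness))  # population mean fitness
        # Strains spreading faster than average gain dominance share.
        x = [xi + dt * xi * (fi - avg) for xi, fi in zip(x, fitness)]
        total = sum(x)
        x = [xi / total for xi in x]  # renormalize to frequencies
    return x

# Three strains start with equal shares; the fastest spreader (hypothetical
# fitness 1.5) comes to dominate over the simulated period.
shares = simulate_dominance(fitness=[1.0, 1.2, 1.5], x0=[1 / 3, 1 / 3, 1 / 3])
```

Running this, the third strain ends with the largest share, which is the qualitative behavior the dominance engine needs: per-strain growth rates in, time-evolving frequencies out.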
Outpacing evolution
“By modeling how viruses evolve and how vaccines interact with them, AI tools like VaxSeer could help health officials make better, faster decisions — and stay one step ahead in the race between infection and immunity,” says Shi.
VaxSeer currently focuses only on the flu virus’s HA (hemagglutinin) protein, the major antigen of influenza. Future versions could incorporate other proteins like NA (neuraminidase), as well as factors like immune history, manufacturing constraints, or dosage levels. Applying the system to other viruses would also require large, high-quality datasets that track both viral evolution and immune responses, data that aren’t always publicly available. The team, however, is currently working on methods that can predict viral evolution in low-data regimes by building on relations between viral families.
“Given the speed of viral evolution, current therapeutic development often lags behind. VaxSeer is our attempt to catch up,” says Regina Barzilay, the School of Engineering Distinguished Professor for AI and Health at MIT, AI lead of Jameel Clinic, and CSAIL principal investigator.
“This paper is impressive, but what excites me perhaps even more is the team’s ongoing work on predicting viral evolution in low-data settings,” says Assistant Professor Jon Stokes of the Department of Biochemistry and Biomedical Sciences at McMaster University in Hamilton, Ontario. “The implications go far beyond influenza. Imagine being able to anticipate how antibiotic-resistant bacteria or drug-resistant cancers might evolve, both of which can adapt rapidly. This kind of predictive modeling opens up a powerful new way of thinking about how diseases change, giving us the opportunity to stay one step ahead and design clinical interventions before escape becomes a major problem.”
Shi and Barzilay wrote the paper with MIT CSAIL postdoc Jeremy Wohlwend ’16, MEng ’17, PhD ’25 and recent CSAIL affiliate Menghua Wu ’19, MEng ’20, PhD ’25. Their work was supported, in part, by the U.S. Defense Threat Reduction Agency and MIT Jameel Clinic.
The VaxSeer system developed at MIT can predict dominant flu strains and identify the most protective vaccine candidates. The tool uses deep learning models trained on decades of viral sequences and lab test results to simulate how the flu virus might evolve and how the vaccines will respond. Pictured: Senior author Regina Barzilay (left) and first author Wenxian Shi.
After ten years, the Future Resilient Systems programme at the Singapore-ETH Centre (SEC) is drawing to a close. In our interview, Programme Director Jonas Jörin talks about the programme's successes and the future of resilience research.
Today’s electric vehicle boom is tomorrow’s mountain of electronic waste. And while myriad efforts are underway to improve battery recycling, many EV batteries still end up in landfills.
A research team from MIT wants to help change that with a new kind of self-assembling battery material that quickly breaks apart when submerged in a simple organic liquid. In a new paper published in Nature Chemistry, the researchers showed the material can work as the electrolyte in a functioning, solid-state battery cell and then revert back to its original molecular components in minutes.
The approach offers an alternative to shredding the battery into a mixed, hard-to-recycle mass. Instead, because the electrolyte serves as the battery’s connecting layer, when the new material returns to its original molecular form, the entire battery disassembles to accelerate the recycling process.
“So far in the battery industry, we’ve focused on high-performing materials and designs, and only later tried to figure out how to recycle batteries made with complex structures and hard-to-recycle materials,” says the paper’s first author Yukio Cho PhD ’23. “Our approach is to start with easily recyclable materials and figure out how to make them battery-compatible. Designing batteries for recyclability from the beginning is a new approach.”
Joining Cho on the paper are PhD candidate Cole Fincher, Ty Christoff-Tempesta PhD ’22, Kyocera Professor of Ceramics Yet-Ming Chiang, Visiting Associate Professor Julia Ortony, Xiaobing Zuo, and Guillaume Lamour.
Better batteries
There’s a scene in one of the “Harry Potter” films where Professor Dumbledore cleans a dilapidated home with the flick of the wrist and a spell. Cho says that image stuck with him as a kid. (What better way to clean your room?) When he saw a talk by Ortony on engineering molecules so that they could assemble into complex structures and then revert back to their original form, he wondered if it could be used to make battery recycling work like magic.
That would be a paradigm shift for the battery industry. Today, recycling batteries requires harsh chemicals, high heat, and complex processing. There are three main parts of a battery: the positively charged cathode, the negatively charged anode, and the electrolyte that shuttles lithium ions between them. The electrolytes in most lithium-ion batteries are highly flammable and degrade over time into toxic byproducts that require specialized handling.
To simplify the recycling process, the researchers decided to make a more sustainable electrolyte. For that, they turned to a class of molecules that self-assemble in water, named aramid amphiphiles (AAs), whose chemical structures and stability mimic that of Kevlar. The researchers further designed the AAs to contain polyethylene glycol (PEG), which can conduct lithium ions, on one end of each molecule. When the molecules are exposed to water, they spontaneously form nanoribbons with ion-conducting PEG surfaces and bases that imitate the robustness of Kevlar through tight hydrogen bonding. The result is a mechanically stable nanoribbon structure that conducts ions across its surface.
“The material is composed of two parts,” Cho explains. “The first part is this flexible chain that gives us a nest, or host, for lithium ions to jump around. The second part is this strong organic material component that is used in the Kevlar, which is a bulletproof material. Those make the whole structure stable.”
When added to water, the molecules self-assemble into millions of nanoribbons that can be hot-pressed into a solid-state material.
“Within five minutes of being added to water, the solution becomes gel-like, indicating there are so many nanofibers formed in the liquid that they start to entangle each other,” Cho says. “What’s exciting is we can make this material at scale because of the self-assembly behavior.”
The team tested the material’s strength and toughness, finding it could endure the stresses associated with making and running the battery. They also constructed a solid-state battery cell that used lithium iron phosphate for the cathode and lithium titanium oxide as the anode, both common materials in today’s batteries. The nanoribbons moved lithium ions successfully between the electrodes, but a side-effect known as polarization limited the movement of lithium ions into the battery’s electrodes during fast bouts of charging and discharging, hampering its performance compared to today’s gold-standard commercial batteries.
“The lithium ions moved along the nanofiber all right, but getting the lithium ion from the nanofibers to the metal oxide seems to be the most sluggish point of the process,” Cho says.
When they immersed the battery cell into organic solvents, the material immediately dissolved, with each part of the battery falling away for easier recycling. Cho compared the materials’ reaction to cotton candy being submerged in water.
“The electrolyte holds the two battery electrodes together and provides the lithium-ion pathways,” Cho says. “So, when you want to recycle the battery, the entire electrolyte layer can fall off naturally and you can recycle the electrodes separately.”
Validating a new approach
Cho says the material is a proof of concept that demonstrates the recycle-first approach.
“We don’t want to say we solved all the problems with this material,” Cho says. “Our battery performance was not fantastic because we used only this material as the entire electrolyte for the paper, but what we’re picturing is using this material as one layer in the battery electrolyte. It doesn’t have to be the entire electrolyte to kick off the recycling process.”
Cho also sees a lot of room for optimizing the material’s performance with further experiments.
Now, the researchers are exploring ways to integrate these kinds of materials into existing battery designs as well as implementing the ideas into new battery chemistries.
“It’s very challenging to convince existing vendors to do something very differently,” Cho says. “But with new battery materials that may come out in five or 10 years, it could be easier to integrate this into new designs in the beginning.”
Cho also believes the approach could help reshore lithium supplies by reusing materials from batteries that are already in the U.S.
“People are starting to realize how important this is,” Cho says. “If we can start to recycle lithium-ion batteries from battery waste at scale, it’ll have the same effect as opening lithium mines in the U.S. Also, each battery requires a certain amount of lithium, so extrapolating out the growth of electric vehicles, we need to reuse this material to avoid massive lithium price spikes.”
The work was supported, in part, by the National Science Foundation and the U.S. Department of Energy. This work was performed, in part, using the MIT.nano Characterization facilities.
A depiction of batteries made with MIT researchers’ new electrolyte material, which is made from a class of molecules that self-assemble in water, named aramid amphiphiles (AAs), whose chemical structures and stability mimic Kevlar.
In World War II, Britain was fighting for its survival against German aerial bombardment. Yet Britain was importing dyes from Germany at the same time. This sounds curious, to put it mildly. How can two countries at war with each other also be trading goods?
Examples of this abound, actually. Britain also traded with its enemies for almost all of World War I. India and Pakistan conducted trade with each other during the First Kashmir War, from 1947 to 1949, and during the India-Pakistan War of 1965. Croatia and then-Yugoslavia traded with each other while fighting in 1992.
“States do in fact trade with their enemies during wars,” says MIT political scientist Mariya Grinberg. “There is a lot of variation in which products get traded, and in which wars, and there are differences in how long trade lasts into a war. But it does happen.”
Indeed, as Grinberg has found, state leaders tend to calculate whether trade can give them an advantage by boosting their own economies while not supplying their enemies with anything too useful in the near term.
“At its heart, wartime trade is all about the tradeoff between military benefits and economic costs,” Grinberg says. “Severing trade denies the enemy access to your products that could increase their military capabilities, but it also incurs a cost to you because you’re losing trade and neutral states could take over your long-term market share.” Therefore, many countries try trading with their wartime foes.
Grinberg explores this topic in a groundbreaking new book, the first one on the subject, “Trade in War: Economic Cooperation Across Enemy Lines,” published this month by Cornell University Press. It is also the first book by Grinberg, an assistant professor of political science at MIT.
Calculating time and utility
“Trade in War” has its roots in research Grinberg started as a doctoral student at the University of Chicago, where she noticed that wartime trade was a phenomenon not yet incorporated into theories of state behavior.
Grinberg wanted to learn about it comprehensively, so, as she quips, “I did what academics usually do: I went to the work of historians and said, ‘Historians, what have you got for me?’”
Modern wartime trading began during the Crimean War, which pitted Russia against France, Britain, the Ottoman Empire, and other allies. Before the war’s start in 1854, France had paid for many Russian goods that could not be shipped because ice in the Baltic Sea was late to thaw. To rescue these goods, France persuaded Britain and Russia to adopt “neutral rights,” codified in the 1856 Declaration of Paris, which formalized the idea that goods in wartime could be shipped via neutral parties (sometimes acting as intermediaries for warring countries).
“This mental image that everyone has, that we don’t trade with our enemies during war, is actually an artifact of the world without any neutral rights,” Grinberg says. “Once we develop neutral rights, all bets are off, and now we have wartime trade.”
Overall, Grinberg’s systematic analysis of wartime trade shows that it needs to be understood on the level of particular goods. During wartime, states calculate how much it would hurt their own economies to stop trade of certain items; how useful specific products would be to enemies during war, and in what time frame; and how long a war is going to last.
“There are two conditions under which we can see wartime trade,” Grinberg says. “Trade is permitted when it does not help the enemy win the war, and it’s permitted when ending it would damage the state’s long-term economic security, beyond the current war.”
Therefore a state might export diamonds, knowing an adversary would need to resell such products over time to finance any military activities. Conversely, states will not trade products that can quickly convert into military use.
“The tradeoff is not the same for all products,” Grinberg says. “All products can be converted into something of military utility, but they vary in how long that takes. If I’m expecting to fight a short war, things that take a long time for my opponent to convert into military capabilities won’t help them win the current war, so they’re safer to trade.” Moreover, she adds, “States tend to prioritize maintaining their long-term economic stability, as long as the stakes don’t hit too close to home.”
This calculus helps explain some seemingly inexplicable wartime trade decisions. In 1917, three years into World War I, Germany started trading dyes to Britain. As it happens, dyes have military uses, for example as coatings for equipment. And World War I, infamously, was lasting far beyond initial expectations. But as of 1917, German planners thought the introduction of unrestricted submarine warfare would bring the war to a halt in their favor within a few months, so they approved the dye exports. That calculation was wrong, but it fits the framework Grinberg has developed.
States: Usually wrong about the length of wars
“Trade in War” has received praise from other scholars in the field. Michael Mastanduno of Dartmouth College has said the book “is a masterful contribution to our understanding of how states manage trade-offs across economics and security in foreign policy.”
For her part, Grinberg notes that her work holds multiple implications for international relations — one being that trade relationships do not prevent hostilities from unfolding, as some have theorized.
“We can’t expect even strong trade relations to deter a conflict,” Grinberg says. “On the other hand, when we learn our assumptions about the world are not necessarily correct, we can try to find different levers to deter war.”
Grinberg has also observed that states are not good, by any measure, at projecting how long they will be at war.
“States very infrequently get forecasts about the length of war right,” Grinberg says. That fact has formed the basis of a second, ongoing Grinberg book project.
“Now I’m studying why states go to war unprepared, why they think their wars are going to end quickly,” Grinberg says. “If people just read history, they will learn almost all of human history works against this assumption.”
At the same time, Grinberg thinks there is much more that scholars could learn specifically about trade and economic relations among warring countries — and hopes her book will spur additional work on the subject.
“I’m almost certain that I’ve only just begun to scratch the surface with this book,” she says.
In research Grinberg started as a doctoral student, she noticed that wartime trade was a phenomenon not yet incorporated into theories of state behavior.
From bustling hawker centres to iconic historical architecture, Singapore’s rich cultural heritage holds powerful potential for storytelling. In celebration of the nation’s 60th year of independence, Pitch It! 2025 shines a spotlight on the theme “The Singapore Story”. Organised annually by the NUS Communications and New Media Society (CNM Society) since 2013, Pitch It! is a nationwide competition that encourages tertiary students to unleash their creativity and inspire social change through the use of diverse forms of media.
This year’s edition, held from April to July, was organised in partnership with Mediacorp’s Bloomr.SG (a content creator network to foster creative talents), the Singapore Heritage Society, and the Singapore Film Society. Featuring the problem statement “How can we enhance the telling of The Singapore Story – celebrating its colonial landmarks, traditional music, kampong life, and hawker food – in a way that fosters unity, pride and cross-cultural appreciation?”, Pitch It! 2025 challenged participants to develop compelling advertising or social media campaigns to bring these stories to life.
Learning to tell cultural stories
The four-month journey saw 24 student teams from polytechnics and universities competing for up to S$3000 worth of cash prizes. To kickstart the creative process, participants attended two highly anticipated masterclasses.
The first, co-led by Mr Han Ming Guang from the Singapore Heritage Society and Ms Priyanka Nair from the Singapore Film Society, highlighted the importance of authenticity in presenting Singapore’s cultural narratives and the role of youth in shaping national memory. Participants were encouraged to think critically about how campaigns could move beyond nostalgia to foster inclusivity, representation, and long-term relevance. Drawing from their own experiences in heritage conservation, Mr Han and Ms Nair shared the challenges they faced in raising public awareness and offered practical insights into the strategies and campaign approaches they employed to address them.
In the second masterclass, Mr Diogo Martins and Ms Denise Tan from Mediacorp’s Bloomr.SG introduced participants to the anatomy of a successful digital campaign. Using real-life case studies, participants explored how to integrate social media trends with emotive storytelling to drive public engagement. They came away with a deeper understanding of how creativity, data, and empathy intersect to create impactful media campaigns in today’s fast-paced digital landscape. Renee Chew, a Year 2 Communications and New Media student from the NUS Faculty of Arts and Social Sciences (FASS), said, “The masterclasses were incredibly engaging and helped us to develop our campaign effectively.”
From concept to campaign
Armed with new insights, teams conducted research and fieldwork and developed campaign concepts ranging from a video series to interactive heritage trails. Following the preliminary round, five standout teams were shortlisted to present their campaigns at the Grand Finals, where they received constructive feedback from industry judges and had the opportunity to refine their proposals ahead of the final pitch.
The Pitch It! 2025 Grand Finals took place in July, with teams Ixora, Wing Wing, Need Compass and Singapore FM presenting their campaigns to a distinguished panel of industry judges comprising the speakers from the earlier masterclasses – Mr Han, Ms Nair, Mr Martins and Ms Lisa Low from Mediacorp’s Bloomr.SG.
The top prize went to Team Wing Wing for their campaign #HearOurSingaporeStory, which centred on the power of sound to connect generations. At unique phone booths located near historic sites such as the Victoria Concert Hall, Clarke Quay and the Esplanade, seniors and youths can share personal stories about Singapore’s iconic landmarks and traditions – creating more authentic and emotionally resonant stories which can be shared and appreciated across generations. Mr Han highlighted how the team had a long-term vision to sustain their campaign beyond their initial proposal and had good environmental scanning. This was one of many reasons why their pitch stood out from the rest.
“It was really fun coming up with bold, creative ideas to bring the Singapore Story to life,” shared Shernice Feng, a second-year Communication Studies student from the Nanyang Technological University’s Wee Kim Wee School of Communication and Information and a member of the winning team. “I found the experience very fulfilling as it gave us a platform to explore our own interests and perspectives.”
Also featured at the Grand Finals was a panel discussion on “Heritage, Media and Inclusive Narratives” with Mr Han, Ms Nair and Dr Jinna Tay, Senior Lecturer from the FASS CNM Department. The panel explored key challenges in heritage communication, including how to represent culture responsibly amid commercial interests, adapting to shifting media consumption patterns, sustaining long-term interest in heritage beyond one-off campaigns, and finding a balance between data-driven content strategies and meaningful storytelling.
As Pitch It! 2025 drew to a close, students had the opportunity to network with speakers and partners, engaging in meaningful conversations that extended well beyond the formal programme.
Preparing our youth for the real world
The competition not only showcased the creativity of Singapore’s youth but also provided a meaningful learning experience – connecting students with industry mentors and giving them a glimpse into the realities of campaign development.
Chieng Josiah, CNM Society Vice-President in charge of external relations and events and a Year 2 CNM student, was proud of his organising team, which included Project Directors Ho Ee Hsuen (Year 4 CNM) and Tran Nguyen Thao Anh (Year 2 CNM), for persevering through the academic year to plan this major competition.
He shared, “It has been a memorable experience planning and improving on this year’s edition to commemorate SG60 with our culture-focused and media-focused masterclasses and Grand Finals. It was also heartwarming to see our participants from various schools connect with industry experts and experienced individuals in the local cultural and heritage scene. We hope Pitch It! continues to inspire our aspiring communications professionals!”
“Pitch It! is a great opportunity for students to tackle real-world industry challenges,” noted Mr Martins. “Through the process, they gain valuable experience in aggregating insights to form coherent solutions. It also helps build their confidence – in public speaking, pitching ideas effectively, and competing at a professional level.”
By the CNM Society at the NUS Faculty of Arts and Social Sciences
Scientists fear funding cuts will slow momentum in ongoing battle with evolving bacteria
A series exploring how research is rising to major challenges in health and society
In 2023, more than 2.4 million cases of syphilis, gonorrhea, and chlamydia were diagnosed in the U.S. Though that number is high, it’s actually an improvement, according to the Centers for Disease Control and Prevention: The number of sexually transmitted infections, or STIs, decreased 1.8 percent overall from 2022 to 2023, with gonorrhea decreasing the most (7.2 percent).
But the number of STI diagnoses is only one part of the problem.
One treatment for STIs is doxycycline. It has been prescribed as a prophylactic for gonorrhea, recommended as a treatment for chlamydia since 2020, and used to treat syphilis during shortages of the preferred treatment, benzathine penicillin. But bacteria are living organisms, and like all living organisms, they evolve. Over time, they develop resistance mechanisms to the antibiotics we create to kill them. And according to Harvard immunologist Yonatan Grad, resistance to doxycycline is growing rapidly in the bacteria that cause gonorrhea.
“The increased use of doxycycline has, as we might have expected, selected for drug resistance,” Grad said.
The pattern of bacteria evolving to overcome our best treatments is one of medicine’s most fundamental problems. Since the introduction of penicillin in the 1940s, antibiotics have radically transformed what’s possible in medicine, far beyond treatments for STIs. They can knock out the bacteria behind everything from urinary tract infections to meningitis to sepsis from infected wounds. But every antibiotic faces the same fate: As soon as it enters use, bacteria begin evolving to survive it.
The scope of the problem is staggering. Doctors wrote 252 million antibiotic prescriptions in 2023 in the U.S. That’s 756 prescriptions for every 1,000 people, up from 613 per 1,000 people in 2020. According to the CDC, more than 2.8 million antimicrobial-resistant (AMR) infections occur each year in the U.S., and more than 35,000 people die as a result.
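The per-capita figures above can be sanity-checked with a few lines of arithmetic (the numbers are taken directly from the CDC statistics quoted in this paragraph; the derived quantities are my own back-of-the-envelope calculations):

```python
# Sanity check of the prescription figures quoted above.
prescriptions = 252_000_000   # antibiotic prescriptions, U.S., 2023
rate_2023 = 756               # prescriptions per 1,000 people, 2023
rate_2020 = 613               # prescriptions per 1,000 people, 2020

# Population implied by the 2023 rate (consistent with the U.S. population):
population = prescriptions / (rate_2023 / 1000)
print(f"implied population: {population / 1e6:.0f} million")  # ~333 million

# Relative growth in the prescribing rate from 2020 to 2023:
growth = (rate_2023 - rate_2020) / rate_2020
print(f"rate increase 2020->2023: {growth:.1%}")  # ~23.3%
```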
For researchers like Grad, the endless battle against the clock can be a bit like a game of high-stakes Whac-a-Mole — tracking antibiotic resistance, figuring out how it works, and developing new kinds of drugs before the bacteria can catch up.
“Being able to treat these infections underlies so many aspects of medicine — urinary tract infections, caring for people who are immunocompromised, preventing surgical infections and treating them if they arise, and on and on,” said Grad. “This is foundational for modern clinical medicine and public health. Antibiotics are the support, the scaffolding on which medicine depends.”
Hold or release new drugs?
Grad’s research shows how quickly resistance can develop. In research described in a July letter in the New England Journal of Medicine, Grad and colleagues evaluated more than 14,000 genome sequences from Neisseria gonorrhoeae, the bacterium that causes gonorrhea, and found that carriage of a gene that confers resistance to tetracyclines — the class of antibiotics to which doxycycline belongs — shot up from 10 percent in 2020 to more than 30 percent in 2024.
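The reported jump in carriage implies a steep compound growth rate, which a one-line calculation makes concrete (the 10 and 30 percent figures come from the letter described above; the per-year framing is my own illustration):

```python
# Carriage of the tetracycline-resistance gene rose from 10% (2020) to
# over 30% (2024): at least a threefold rise over four years.
c_2020, c_2024, years = 0.10, 0.30, 4
annual_factor = (c_2024 / c_2020) ** (1 / years)
print(f"implied annual growth factor: {annual_factor:.2f}x per year")  # ~1.32x
```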
Fortunately, doxycycline remains effective as a post-exposure prophylaxis for syphilis and chlamydia. It’s an open question why some pathogens are quicker to develop resistance than others. The urgency varies by organism, Grad said, with some, like Mycobacterium tuberculosis, the cause of tuberculosis, and Pseudomonas aeruginosa, showing “extremely drug-resistant or totally drug-resistant strains” that leave doctors facing untreatable infections.
The findings raise alarm bells, or at least questions, in doctors’ offices around the country: As bacteria develop resistance to tried-and-true antibiotics, when should new drugs be introduced for maximal utility before the bacteria inevitably outwit them, too? Traditional stewardship practice has recommended holding back new drugs until the old ones stop working. But 2023 research from Grad’s lab has challenged that approach. In mathematical models evaluating strategies for introducing a new antibiotic for gonorrhea, Grad found that keeping the new antibiotic in reserve allowed resistance to reach 5 percent much sooner than introducing it immediately or using it in combination with the existing drug.
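The kind of strategy comparison described here can be illustrated with a deliberately simple toy simulation. This is not the published model: the logistic growth law, the rates, the initial resistant fractions, and the 5 percent threshold below are all assumptions chosen purely for illustration.

```python
# Toy comparison of strategies for introducing a new antibiotic B alongside
# an existing drug A. Resistance to each drug is assumed to grow logistically
# at a rate proportional to how heavily that drug is used.

def simulate(strategy, steps=2000, dt=0.01, rate=1.0, threshold=0.05):
    """Return the time at which resistance to the new drug B reaches
    `threshold` under the given usage strategy, or None if it never does."""
    res_a, res_b = 0.01, 0.001  # initial resistant fractions (assumed)
    t = 0.0
    for _ in range(steps):
        if strategy == "reserve":
            # Hold B back until resistance to A passes the threshold.
            use_a, use_b = (1.0, 0.0) if res_a < threshold else (0.0, 1.0)
        else:  # "combination": split usage evenly from the start
            use_a, use_b = 0.5, 0.5
        res_a += rate * use_a * res_a * (1 - res_a) * dt
        res_b += rate * use_b * res_b * (1 - res_b) * dt
        t += dt
        if res_b >= threshold:
            return t
    return None

for s in ("reserve", "combination"):
    print(s, simulate(s))
```

In this toy parameterization, holding the new drug in reserve reaches the 5 percent mark sooner, echoing the direction of the result described above; the published models account for much more, including transmission dynamics and fitness costs of resistance.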
Lifesaving progress halted
Extra time could be critical for Amory Houghton Professor of Chemistry Andrew Myers, whose lab has been developing new antibiotics, including ones that target gonorrhea, for more than 30 years.
“Most of the antibiotics in our ‘modern’ arsenal are some 50 years old and no longer work against a lot of the pathogens that are emerging in hospitals and even in the community,” Myers said. “It’s a huge problem and it’s not as well appreciated as I think it should be.”
Many antibiotics work by targeting and inhibiting the bacterial ribosome, the central machinery that translates the instructions in RNA into protein. Ribosomes are “fantastically complex” 3D shapes, Myers said. Creating new antibiotics means inventing new chemical compounds that can bind like puzzle pieces into their grooves and protrusions.
“My lab will spend quite a lot of time, sometimes years, to develop the chemistry — to invent the chemistry — that allows us to prepare new members of these classes of antibiotics,” Myers said. “And then we spend years making quite literally thousands of different members of the class, and then we evaluate them. Do they kill bacteria? Do they kill bacteria that are resistant to existing antibiotics? We’ve been incredibly successful with this, one antibiotic class after another. The strategy works.”
But it’s also in danger. The Trump administration ended a National Institutes of Health grant to Myers’ lab for the development of lincosamides, a class of antibiotics whose last approved member, clindamycin, dates to 1970. A second terminated NIH grant may kill a promising new antibiotic on the cusp of further development. Myers’ lab has created a new molecule that has proven effective in killing Klebsiella pneumoniae and E. coli, both identified by the World Health Organization as among the highest-priority pathogens. Without continued funding, the molecule may not make it to the clinical trial phase and may never become an approved drug.
“A delusion among people is that these decisions can simply be reversed and these NIH grants restored,” Myers said. “That’s not true. The damage is real, and it’s irreversible in some cases.”
Carrying on Paul Farmer’s legacy
The funding cuts extend beyond individual labs to a global health infrastructure. Carole Mitnick, a professor of global health and social medicine at Harvard Medical School, studies multidrug-resistant tuberculosis (MDR-TB) and has watched about 79 percent of USAID funding for global TB support get slashed this year.
“In the Democratic Republic of Congo, in Sierra Leone, and no doubt elsewhere, we’ve seen stocks of lifesaving anti-TB drugs sitting in warehouses, expiring, because programs that would have delivered them have been canceled or staff who would have collected them have been abruptly fired,” she said. “Not only is it immediately deadly and cruel not to deliver these lifesaving cures, but it sets the scene for more antimicrobial resistance by not delivering complete treatments. And it very clearly wastes U.S. taxpayer money to invest in the purchase of these drugs and let them sit in warehouses and expire.”
Mitnick’s work on multidrug-resistant TB, a form of antimicrobial resistance, builds on the legacy of Paul Farmer, the late Harvard professor and Partners In Health co-founder who revolutionized MDR-TB treatment by rejecting utilitarian approaches that wrote off the most vulnerable patients.
“Getting to know Paul and having him advise me, initially on my master’s thesis and ultimately on my doctoral dissertation, gave me a new framework,” Mitnick said. “It allowed me the freedom to use a social justice framework and to say that actually our research should be motivated by who’s suffering the greatest. How do we blend the research, which we’re very well placed to do at Harvard, with direct service and trying to reach the populations who are most marginalized? That shape is still very much in place and still informing the choices that several researchers in our department make in Paul’s legacy.”
Globally, an estimated 500,000 people develop MDR-TB or its even hardier relative, extensively drug-resistant TB, each year. MDR-TB caused an estimated 150,000 deaths worldwide in 2023. TB is the poster child for pathogen characteristics and social conditions that favor selection for drug-resistant mutants. In a single case of TB, the bacterial population comprises bacteria at different stages of growth and in different environments of the body, requiring distinct drugs that can attack each of these forms. Multidrug treatment regimens are long (measured in months, not days) and toxic, making them difficult for people to complete. And in the absence of any incentives or requirements, there’s a long lag between developing new drugs and developing tests that can detect resistance to those drugs. Consequently, treatment is often delivered without any information about resistance, in turn generating more resistance.
The fight against MDR-TB has an unlikely new ally: Nerdfighters, the fan group of prominent video bloggers John and Hank Green — or, more specifically, a subset of that fandom calling themselves TBFighters. John Green’s book, “Everything Is Tuberculosis,” raised awareness about the prohibitive cost of TB diagnostic tests.
Mitnick said that in the acknowledgments, Green called his book a sort of love letter to Paul Farmer. “Paul didn’t directly introduce John to TB, but it really is Paul’s legacy that took John Green to Sierra Leone, and then he met this young man named Henry who had multidrug-resistant tuberculosis. It awakened in John the awareness that actually TB was not a disease of the past, but a disease very much of the present.”
The TBFighters energized an existing coalition movement to reduce the cost of testing for TB and other diseases from about $10 per test to about $5 per test, based on estimates that $5 covered the cost of manufacturing plus a profit, even at lower sales volumes.
“It wasn’t until John Green and the TBFighters entered the fray in 2023 that we made any headway: The manufacturer announced a reduction of about 20 percent on the price of one TB test,” Mitnick said. “So not a full win, but a partial win.”
Despite the challenges, researchers remain cautiously optimistic. “In my opinion, we can absolutely win the game — temporarily,” said Myers. “Whatever we develop, bacteria will find a way to outwit us. But I’m optimistic that the molecules that we’re making could have a clinical lifetime of many decades, maybe even as long as 100 years, if they’re used prudently.”
Grad sees his work more like the construction crews that repair the city sidewalk or maintain bridges. “I think of antibiotics as infrastructure,” he said. “These tools that we use to maintain our health require continual investment.”
Research links by-products of steroid hormone to excessive daytime sleepiness
Jacqueline Mitchell
BIDMC Communications
A new study sheds light on the biological underpinnings of excessive daytime sleepiness, a persistent and inappropriate urge to fall asleep during the day — during work, at meals, even mid-conversation — that interferes with daily functioning.
The findings, published in The Lancet’s eBioMedicine, open the door to exploring how nutrition, lifestyle, and environmental exposures interact with genetic and biological processes to affect alertness.
“Recent studies identified genetic variants associated with excessive daytime sleepiness, but genetics explains only a small part of the story,” said co-corresponding author Tamar Sofer, director of Biostatistics and Bioinformatics at the Cardiovascular Institute at Beth Israel Deaconess Medical Center, and an associate professor at Harvard T.H. Chan School of Public Health and Harvard Medical School. “We wanted to identify biomarkers that can give stronger insights into the mechanisms of excessive daytime sleepiness and help explain why some people experience persistent sleepiness even when their sleep habits seem healthy.”
Investigators from Harvard-affiliated BIDMC and Brigham and Women’s Hospital turned to metabolite analysis to better understand the biology behind excessive daytime sleepiness. Metabolites are small molecules produced as the body carries out its normal functions, from synthesizing hormones to metabolizing nutrients to clearing environmental toxins. By measuring these metabolites, researchers created a profile of excessive daytime sleepiness.
The scientists analyzed blood levels of 877 metabolites in samples taken from more than 6,000 individuals in the Hispanic Community Health Study/Study of Latinos (HCHS/SOL), a long-running study sponsored by the National Institutes of Health since 2006. When they cross-referenced these data with participants’ self-reported measures of sleepiness on an official survey, investigators identified seven metabolites that were significantly linked with higher levels of excessive daytime sleepiness.
The seven metabolites turned out to be involved in the production of steroids and other biological processes already implicated in excessive daytime sleepiness. When the investigators looked only at data from male participants, an additional three metabolites were identified, suggesting there might be sex-based biological differences in how excessive daytime sleepiness manifests.
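Conceptually, the cross-referencing step is a large batch of per-metabolite association tests followed by a multiple-testing correction. A minimal sketch on synthetic data (none of these numbers come from the study, and the cohort and panel sizes are scaled down from 6,000 participants and 877 metabolites for a quick run):

```python
import math
import random

# Synthetic screen: test each metabolite's blood level for association with
# a sleepiness score, then Bonferroni-correct for the parallel tests.
random.seed(0)
n_people, n_metabolites = 2000, 120

# Fabricated data in which only metabolite 0 truly tracks the score.
levels = [[random.gauss(0, 1) for _ in range(n_people)]
          for _ in range(n_metabolites)]
score = [0.2 * levels[0][i] + random.gauss(0, 1) for i in range(n_people)]

def pearson_p(x, y):
    """Pearson correlation with a normal-approximation two-sided p-value
    (adequate at samples this large)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    r = sxy / (sx * sy)
    z = r * math.sqrt(n - 2) / math.sqrt(1 - r * r)
    return r, math.erfc(abs(z) / math.sqrt(2))

alpha = 0.05 / n_metabolites  # Bonferroni threshold across all tests
hits = [m for m in range(n_metabolites)
        if pearson_p(levels[m], score)[1] < alpha]
print("metabolites passing the corrected threshold:", hits)
```

The real analysis adjusts for covariates and uses richer regression models; the sketch only conveys the screen-then-correct structure.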
The findings add weight to the idea that excessive daytime sleepiness isn’t just the result of too little sleep but can reflect physiological circumstances that might someday be diagnosed through blood tests or treated through targeted interventions.
“As we learn what’s happening biologically, we are beginning to understand how and why EDS occurs, the early signs that someone might have it, and what we can do to help patients,” said lead author Tariq Faquih, a postdoctoral research fellow in Sofer’s lab, the lab of Heming Wang at BWH, and a fellow in medicine at HMS. “These insights could eventually lead to new strategies for preventing or managing sleep disorders that include daytime sleepiness as a major symptom.”
This research was supported in part by the National Institutes of Health and the National Institute on Aging.
Our cells produce a variety of proteins, each with a specific role that, in many cases, means that they need to be in a particular part of the cell where that role is needed. One of the ways that cells ensure certain proteins end up in the right location at the right time is through localized translation, a process that ensures that proteins are made — or translated — close to where they will be needed. MIT professor of biology and Whitehead Institute for Biomedical Research member Jonathan Weissman and colleagues have studied localized translation in order to understand how it affects cell functions and allows cells to quickly respond to changing conditions.
Now, Weissman, who is also a Howard Hughes Medical Institute Investigator, and postdoc in his lab Jingchuan Luo have expanded our knowledge of localized translation at mitochondria, structures that generate energy for the cell. In an open-access paper published today in Cell, they share a new tool, LOCL-TL, for studying localized translation in close detail, and describe the discoveries it enabled about two classes of proteins that are locally translated at mitochondria.
The importance of localized translation at mitochondria relates to their unusual origin. Mitochondria were once bacteria that lived within our ancestors’ cells. Over time, the bacteria lost their autonomy and became part of the larger cells, which included migrating most of their genes into the larger cell’s genome in the nucleus. Cells evolved processes to ensure that proteins needed by mitochondria that are encoded in genes in the larger cell’s genome get transported to the mitochondria. Mitochondria retain a few genes in their own genome, so production of proteins from the mitochondrial genome and that of the larger cell’s genome must be coordinated to avoid mismatched production of mitochondrial parts. Localized translation may help cells to manage the interplay between mitochondrial and nuclear protein production — among other purposes.
How to detect local protein production
For a protein to be made, genetic code stored in DNA is read into RNA, and then the RNA is read or translated by a ribosome, a cellular machine that builds a protein according to the RNA code. Weissman’s lab previously developed a method to study localized translation by tagging ribosomes near a structure of interest, and then capturing the tagged ribosomes in action and observing the proteins they are making. This approach, called proximity-specific ribosome profiling, allows researchers to see what proteins are being made where in the cell. The challenge that Luo faced was how to tweak this method to capture only ribosomes at work near mitochondria.
Ribosomes work quickly, so a ribosome that gets tagged while making a protein at the mitochondria can move on to making other proteins elsewhere in the cell in a matter of minutes. The only way researchers can guarantee that the ribosomes they capture are still working on proteins made near the mitochondria is if the experiment happens very quickly.
Weissman and colleagues had previously solved this time sensitivity problem in yeast cells with a ribosome-tagging tool called BirA that is activated by the presence of the molecule biotin. BirA is fused to the cellular structure of interest, and tags ribosomes it can touch — but only once activated. Researchers keep the cell depleted of biotin until they are ready to capture the ribosomes, to limit the time when tagging occurs. However, this approach does not work with mitochondria in mammalian cells because they need biotin to function normally, so it cannot be depleted.
Luo and Weissman adapted the existing tool to respond to blue light instead of biotin. The new tool, LOV-BirA, is fused to the mitochondrion’s outer membrane. Cells are kept in the dark until the researchers are ready. Then they expose the cells to blue light, activating LOV-BirA to tag ribosomes. They give it a few minutes and then quickly extract the ribosomes. This approach proved very accurate at capturing only ribosomes working at mitochondria.
The researchers then used a method originally developed by the Weissman lab to extract the sections of RNA inside of the ribosomes. This allows them to see exactly how far along in the process of making a protein the ribosome is when captured, which can reveal whether the entire protein is made at the mitochondria, or whether it is partly produced elsewhere and only gets completed at the mitochondria.
“One advantage of our tool is the granularity it provides,” Luo says. “Being able to see what section of the protein is locally translated helps us understand more about how localized translation is regulated, which can then allow us to understand its dysregulation in disease and to control localized translation in future studies.”
Two protein groups are made at mitochondria
Using these approaches, the researchers found that about 20 percent of the genes for mitochondrial proteins that reside in the main cellular genome are translated locally at mitochondria. These proteins can be divided into two distinct groups with different evolutionary histories and mechanisms for localized translation.
One group consists of relatively long proteins, each containing more than 400 amino acids or protein building blocks. These proteins tend to be of bacterial origin — present in the ancestor of mitochondria — and they are locally translated in both mammalian and yeast cells, suggesting that their localized translation has been maintained through a long evolutionary history.
Like many mitochondrial proteins encoded in the nucleus, these proteins contain a mitochondrial targeting sequence (MTS), a ZIP code that tells the cell where to bring them. The researchers discovered that most proteins containing an MTS also contain a nearby inhibitory sequence that prevents transportation until they are done being made. This group of locally translated proteins lacks the inhibitory sequence, so they are brought to the mitochondria during their production.
Production of these longer proteins begins anywhere in the cell, and then after approximately the first 250 amino acids are made, they get transported to the mitochondria. While the rest of the protein gets made, it is simultaneously fed into a channel that brings it inside the mitochondrion. This ties up the channel for a long time, limiting import of other proteins, so cells can only afford to do this simultaneous production and import for select proteins. The researchers hypothesize that these bacterial-origin proteins are given priority as an ancient mechanism to ensure that they are accurately produced and placed within mitochondria.
The second locally translated group consists of short proteins, each less than 200 amino acids long. These proteins are more recently evolved, and correspondingly, the researchers found that the mechanism for their localized translation is not shared by yeast. Their mitochondrial recruitment happens at the RNA level. Two sequences within regulatory sections of each RNA molecule that do not encode the final protein instead code for the cell’s machinery to recruit the RNAs to the mitochondria.
The researchers searched for molecules that might be involved in this recruitment, and identified the RNA binding protein AKAP1, which exists at mitochondria. When they eliminated AKAP1, the short proteins were translated indiscriminately around the cell. This provided an opportunity to learn more about the effects of localized translation, by seeing what happens in its absence. When the short proteins were not locally translated, this led to the loss of various mitochondrial proteins, including those involved in oxidative phosphorylation, our cells’ main energy generation pathway.
In future research, Weissman and Luo will delve deeper into how localized translation affects mitochondrial function and dysfunction in disease. The researchers also intend to use LOCL-TL to study localized translation in other cellular processes, including in relation to embryonic development, neural plasticity, and disease.
“This approach should be broadly applicable to different cellular structures and cell types, providing many opportunities to understand how localized translation contributes to biological processes,” Weissman says. “We’re particularly interested in what we can learn about the roles it may play in diseases including neurodegeneration, cardiovascular diseases, and cancers.”
The MIT School of Humanities, Arts, and Social Sciences announced leadership changes in three of its academic units for the 2025-26 academic year.
“We have an excellent cohort of leaders coming in,” says Agustín Rayo, the Kenan Sahin Dean of the School of Humanities, Arts, and Social Sciences. “I very much look forward to working with them and welcoming them into the school's leadership team.”
Sandy Alexandre will serve as head of MIT Literature. Alexandre is an associate professor of literature and served as co-head of the section in 2024-25. Her research spans Black American literature and culture from the late 19th century to the present day. Her first book, “The Properties of Violence: Claims to Ownership in Representations of Lynching,” uses the history of American lynching violence as a framework to understand matters concerning displacement, property ownership, and the American pastoral ideology in a literary context. Her work thoughtfully explores how literature envisions ecologies of people, places, and objects as recurring echoes of racial violence, resonating across the long arc of U.S. history. She earned a bachelor’s degree in English language and literature from Dartmouth College and a master’s and PhD in English from the University of Virginia.
Manduhai Buyandelger will serve as director of the Program in Women’s and Gender Studies. A professor of anthropology, Buyandelger seeks solutions for achieving more integrated (and less violent) lives for humans and non-humans by examining the politics of multi-species care and exploitation, urbanization, and how diverse material and spiritual realities interact and shape the experiences of different beings. By examining urban multi-species coexistence in Mongolia, the United States, Japan, and elsewhere, her work probes possibilities for co-cultivating an integrated multi-species existence. She is also developing an anthro-engineering project with the MIT Department of Nuclear Science and Engineering (NSE) to explore pathways to decarbonization in Mongolia by examining user-centric design and responding to political and cultural constraints on clean-energy issues. She offers a transdisciplinary course with NSE, 21A.S01 (Anthro-Engineering: Decarbonization at the Million Person Scale), in collaboration with her colleagues in Mongolia’s capital, Ulaanbaatar. She has written two books on religion, gender, and politics in post-socialist Mongolia: “Tragic Spirits: Shamanism, Gender, and Memory in Contemporary Mongolia” (University of Chicago Press, 2013) and “A Thousand Steps to the Parliament: Constructing Electable Women in Mongolia” (University of Chicago Press, 2022). Her essays have appeared in American Ethnologist, the Journal of the Royal Anthropological Institute, Inner Asia, and the Annual Review of Anthropology. She earned a BA in literature and linguistics and an MA in philology from the National University of Mongolia, and a PhD in social anthropology from Harvard University.
Eden Medina PhD ’05 will serve as head of the Program in Science, Technology, and Society. A professor of science, technology, and society, Medina studies the relationship of science, technology, and processes of political change in Latin America. She is the author of “Cybernetic Revolutionaries: Technology and Politics in Allende's Chile” (MIT Press, 2011), which won the 2012 Edelstein Prize for best book on the history of technology and the 2012 Computer History Museum Prize for best book on the history of computing. Her co-edited volume “Beyond Imported Magic: Essays on Science, Technology, and Society in Latin America” (MIT Press, 2014) received the Amsterdamska Award from the European Society for the Study of Science and Technology (2016). In addition to her writings, Medina co-curated the exhibition “How to Design a Revolution: The Chilean Road to Design,” which opened in 2023 at the Centro Cultural La Moneda in Santiago, Chile, and is currently on display at the design museum Disseny Hub in Barcelona, Spain. She holds a PhD in the history and social study of science and technology from MIT and a master’s degree in studies of law from Yale Law School. She worked as an electrical engineer prior to starting her graduate studies.
Joining the SHASS leadership team are (left to right) Sandy Alexandre, Manduhai Buyandelger, and Eden Medina.
Fikile R. Brushett, a Ralph Landau Professor of Chemical Engineering Practice, was named director of MIT’s David H. Koch School of Chemical Engineering Practice, effective July 1. In this role, Brushett will lead one of MIT’s most innovative and distinctive educational programs.
Brushett joined the chemical engineering faculty in 2012 and has been a deeply engaged member of the department. An internationally recognized leader in the field of energy storage, his research advances the science and engineering of electrochemical technologies for a sustainable energy economy. He is particularly interested in the fundamental processes that define the performance, cost, and lifetime of present-day and next-generation electrochemical systems. In addition to his research, Brushett has served as a first-year undergraduate advisor, as a member of the department’s graduate admissions committee, and on MIT’s Committee on the Undergraduate Program.
“Fik’s scholarly excellence and broad service position him perfectly to take on this new challenge,” says Kristala L. J. Prather, the Arthur D. Little Professor and head of the Department of Chemical Engineering (ChemE). “His role as practice school director reflects not only his technical expertise, but his deep commitment to preparing students for meaningful, impactful careers. I’m confident he will lead the practice school with the same spirit of excellence and innovation that has defined the program for generations.”
Brushett succeeds T. Alan Hatton, a Ralph Landau Professor of Chemical Engineering Practice Post-Tenure, who directed the practice school for 36 years. For many, Hatton’s name is synonymous with the program. When he became director in 1989, only a handful of major chemical companies hosted stations.
“I realized that focusing on one industry segment was not sustainable and did not reflect the breadth of a chemical engineering education,” Hatton recalls. “So I worked to modernize the experience for students and have it reflect the many ways chemical engineers practice in the modern world.”
Under Hatton’s leadership, the practice school expanded globally and across industries, providing students with opportunities to work on diverse technologies in a wide range of locations. He pioneered the model of recruiting new companies each year, allowing many more firms to participate while also spreading costs across a broader sponsor base. He also introduced an intensive, hands-on project management course at MIT during Independent Activities Period, which has become a valuable complement to students’ station work and future careers.
Value for students and industry
The practice school benefits not only students, but also the companies that host them. By embedding teams directly into manufacturing plants and R&D centers, businesses gain fresh perspectives on critical technical challenges, coupled with the analytical rigor of MIT-trained problem-solvers. Many sponsors report that projects completed by practice school students have yielded measurable cost savings, process improvements, and even new opportunities for product innovation.
For manufacturing industries, where efficiency, safety, and sustainability are paramount, the program provides actionable insights that help companies strengthen competitiveness and accelerate growth. The model creates a unique partnership: students gain true real-world training, while companies benefit from MIT expertise and the creativity of the next generation of chemical engineers.
A century of hands-on learning
Founded in 1916 by MIT chemical engineering alumnus Arthur D. Little and Professor William Walker, with funding from George Eastman of Eastman Kodak, the practice school was designed to add a practical dimension to chemical engineering education. The first five sites — all in the Northeast — focused on traditional chemical industries working on dyes, abrasives, solvents, and fuels.
Today, the program remains unique in higher education. Students consult with companies worldwide across fields ranging from food and pharmaceuticals to energy and finance, tackling some of industry’s toughest challenges. More than a hundred years after its founding, the practice school continues to embody MIT’s commitment to hands-on, problem-driven learning that transforms both students and the industries they serve.
The practice school experience is part of ChemE’s MSCEP and PhD/ScDCEP programs. After coursework for each program is completed, a student attends practice school stations at host company sites. A group of six to 10 students spends two months each at two stations; each station experience includes teams of two or three students working on a month-long project, where they will prepare formal talks, scope of work, and a final report for the host company. Recent stations include Evonik in Marl, Germany; AstraZeneca in Gaithersburg, Maryland; EGA in Dubai, UAE; AspenTech in Bedford, Massachusetts; and Shell Technology Center and Dimensional Energy in Houston, Texas.
MIT researchers have developed a technique that enables real-time, 3D monitoring of corrosion, cracking, and other material failure processes inside a nuclear reactor environment.
This could allow engineers and scientists to design safer nuclear reactors that also deliver higher performance for applications like electricity generation and naval vessel propulsion.
During their experiments, the researchers utilized extremely powerful X-rays to mimic the behavior of neutrons interacting with a material inside a nuclear reactor.
They found that adding a buffer layer of silicon dioxide between the material and its substrate, and keeping the material under the X-ray beam for a longer period of time, improves the stability of the sample. This allows for real-time monitoring of material failure processes.
By reconstructing 3D image data on the structure of a material as it fails, researchers could design more resilient materials that can better withstand the stress caused by irradiation inside a nuclear reactor.
“If we can improve materials for a nuclear reactor, it means we can extend the life of that reactor. It also means the materials will take longer to fail, so we can get more use out of a nuclear reactor than we do now. The technique we’ve demonstrated here allows us to push the boundary in understanding how materials fail in real time,” says Ericmoore Jossou, who holds shared appointments in the Department of Nuclear Science and Engineering (NSE), where he is the John Clark Hardwick Professor, the Department of Electrical Engineering and Computer Science (EECS), and the MIT Schwarzman College of Computing.
Jossou, senior author of a study on this technique, is joined on the paper by lead author David Simonne, an NSE postdoc; Riley Hultquist, a graduate student in NSE; Jiangtao Zhao, of the European Synchrotron; and Andrea Resta, of Synchrotron SOLEIL. The research was published Tuesday in the journal Scripta Materialia.
“Only with this technique can we measure strain with a nanoscale resolution during corrosion processes. Our goal is to bring such novel ideas to the nuclear science community while using synchrotrons both as an X-ray probe and radiation source,” adds Simonne.
Real-time imaging
Studying real-time failure of materials used in advanced nuclear reactors has long been a goal of Jossou’s research group.
Usually, researchers can only learn about such material failures after the fact, by removing the material from its environment and imaging it with a high-resolution instrument.
“We are interested in watching the process as it happens. If we can do that, we can follow the material from beginning to end and see when and how it fails. That helps us understand a material much better,” he says.
They simulate the process by firing an extremely focused X-ray beam at a sample to mimic the environment inside a nuclear reactor. The researchers must use a special type of high-intensity X-ray, which is only found in a handful of experimental facilities worldwide.
For these experiments they studied nickel, a material incorporated into alloys that are commonly used in advanced nuclear reactors. But before they could start the X-ray equipment, they had to prepare a sample.
To do this, the researchers used a process called solid state dewetting, which involves putting a thin film of the material onto a substrate and heating it to an extremely high temperature in a furnace until it transforms into single crystals.
“We thought making the samples was going to be a walk in the park, but it wasn’t,” Jossou says.
As the nickel heated up, it interacted with the silicon substrate and formed a new chemical compound, essentially derailing the entire experiment. After much trial-and-error, the researchers found that adding a thin layer of silicon dioxide between the nickel and substrate prevented this reaction.
But when crystals formed on top of the buffer layer, they were highly strained. This means the individual atoms had moved slightly to new positions, causing distortions in the crystal structure.
Phase retrieval algorithms can typically recover the 3D size and shape of a crystal in real-time, but if there is too much strain in the material, the algorithms will fail.
However, the team was surprised to find that keeping the X-ray beam trained on the sample for a longer period of time caused the strain to slowly relax, due to the silicon buffer layer. After a few extra minutes of X-rays, the sample was stable enough that they could utilize phase retrieval algorithms to accurately recover the 3D shape and size of the crystal.
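The phase-retrieval step the researchers rely on can be sketched with a minimal error-reduction loop in Python. This is a textbook Gerchberg-Saxton/Fienup-style illustration, not the team's actual reconstruction pipeline; the 1D "crystal," its support, and all sizes here are invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D "crystal": a positive density confined to a known support.
n = 64
true_obj = np.zeros(n)
true_obj[24:40] = np.bartlett(16)  # triangular density bump

# A diffraction measurement records only the Fourier magnitude;
# the phase, which encodes the shape, is lost.
measured_mag = np.abs(np.fft.fft(true_obj))

support = np.zeros(n, dtype=bool)
support[24:40] = True  # assume the crystal's extent is known

# Error reduction: alternate between the magnitude constraint in
# Fourier space and the support/positivity constraint in real space.
obj = rng.random(n) * support  # random starting guess
for _ in range(3000):
    F = np.fft.fft(obj)
    F = measured_mag * np.exp(1j * np.angle(F))  # keep phase, fix magnitude
    obj = np.fft.ifft(F).real
    obj *= support          # zero outside the known support
    obj[obj < 0] = 0.0      # densities cannot be negative

rel_err = np.linalg.norm(obj - true_obj) / np.linalg.norm(true_obj)
print(f"relative reconstruction error: {rel_err:.3f}")
```

Loops like this stagnate when the phase varies too rapidly across the crystal, which is the algorithmic face of the "too much strain" failure the article describes.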
“No one had been able to do that before. Now that we can make this crystal, we can image electrochemical processes like corrosion in real time, watching the crystal fail in 3D under conditions that are very similar to inside a nuclear reactor. This has far-reaching impacts,” he says.
They experimented with other substrates, such as niobium-doped strontium titanate, and found that only a silicon dioxide-buffered silicon wafer produced this unique effect.
An unexpected result
As they fine-tuned the experiment, the researchers discovered something else.
They could also use the X-ray beam to precisely control the amount of strain in the material, which could have implications for the development of microelectronics.
In the microelectronics community, engineers often introduce strain to deform a material’s crystal structure in a way that boosts its electrical or optical properties.
“With our technique, engineers can use X-rays to tune the strain in microelectronics while they are manufacturing them. While this was not our goal with these experiments, it is like getting two results for the price of one,” he adds.
In the future, the researchers want to apply this technique to more complex materials like steel and other metal alloys used in nuclear reactors and aerospace applications. They also want to see how changing the thickness of the silicon dioxide buffer layer impacts their ability to control the strain in a crystal sample.
“This discovery is significant for two reasons. First, it provides fundamental insight into how nanoscale materials respond to radiation — a question of growing importance for energy technologies, microelectronics, and quantum materials. Second, it highlights the critical role of the substrate in strain relaxation, showing that the supporting surface can determine whether particles retain or release strain when exposed to focused X-ray beams,” says Edwin Fohtung, an associate professor at the Rensselaer Polytechnic Institute, who was not involved with this work.
This work was funded, in part, by the MIT Faculty Startup Fund and the U.S. Department of Energy. The sample preparation was carried out, in part, at the MIT.nano facilities.
With three Sloan Fellows, three NSF CAREER Award winners, and a Forbes 30 Under 30 honoree among them, this year’s new faculty cohort arrives with notable accolades and ambitions.
In a groundbreaking study, researchers from the University of Bern and ETH Zurich have shown how climate change is intensifying supercell thunderstorms in Europe. At a global temperature increase of 3 degrees Celsius, these powerful storms are expected to occur more frequently, especially in the Alpine region.
His understanding of how forces like earthquakes and waves interact with buildings and bridges transformed the fields of structural reliability and risk assessment.
Solving evolutionary mystery of how humans came to walk upright
Gayani Senevirathne (left) holds the shorter, wider human pelvis, which evolved from the longer upper hipbones of primates, which Terence Capellini is displaying.
Niles Singer/Harvard Staff Photographer
Kermit Pattison
Harvard Staff Writer
New study identifies genetic, developmental shifts that resculpted pelvis, setting ancestors apart from other primates
The pelvis is often called the keystone of upright locomotion. More than any other part of our lower body, it has been radically altered over millions of years, allowing our ancestors to become the bipeds who trekked and settled across the planet.
But just how evolution accomplished this extreme makeover has remained a mystery. Now a new study in the journal Nature led by Harvard scientists reveals two key genetic changes that remodeled the pelvis and enabled our bizarre habit of walking on two legs.
“What we’ve done here is demonstrate that in human evolution there was a complete mechanistic shift,” said Terence Capellini, professor and chair of the Department of Human Evolutionary Biology and senior author of the paper. “There’s no parallel to that in other primates. The evolution of novelty — the transition from fins to limbs or the development of bat wings from fingers — often involves massive shifts in how developmental growth occurs. Here we see humans are doing the same thing, but for their pelves.”
Anatomists have long known that the human pelvis is unique among primates. The upper hipbones, or ilia, of chimpanzees, bonobos, and gorillas — our closest relatives — are tall, narrow, and oriented flat front to back. From the side they look like thin blades. The geometry of the ape pelvis anchors large muscles for climbing.
In humans, the hipbones have rotated to the sides to form a bowl shape (in fact, the word “pelvis” derives from the Latin word for basin). Our flaring hipbones provide attachments for the muscles that allow us to maintain balance as we shift our weight from one leg to the other while walking and running.
In their new paper, the international team of researchers identified some of the key genetic and developmental shifts that radically resculpted the quadrupedal ape pelvis into a bipedal one.
“What we have tried to do is integrate different approaches to get a complete story about how the pelvis developed over time,” said Gayani Senevirathne, a postdoctoral fellow in Capellini’s lab and study lead author.
Senevirathne analyzed 128 samples of embryonic tissues from humans and nearly two dozen other primate species from museums in the U.S. and Europe. These collections included century-old specimens mounted on glass slides or preserved in jars.
The researchers also studied human embryonic tissues collected by the Birth Defects Research Laboratory at the University of Washington. They took CT scans and analyzed histology (the microscopic structure of tissues) to reveal the anatomy of the pelvis during early stages of development.
“The work that Gayani did was a tour de force,” said Capellini. “This was like five projects in one.”
The researchers discovered that evolution reshaped the human pelvis in two major steps. First, it shifted a growth plate by 90 degrees to make the human ilium wide instead of tall. Later, another shift altered the timeline of embryonic bone formation.
Most bones of the lower body take shape through a process that begins when cartilage cells form on growth plates aligned along the long axis of the growing bone. This cartilage later hardens into bone in a process called ossification.
In the early stages of development, the human iliac growth plate forms with growth aligned head-to-tail, just as in other primates. But by day 53, the human growth plates have shifted perpendicular to the original axis, thus shortening and broadening the hipbone.
“Looking at the pelvis, that wasn’t on my radar,” said Capellini. “I was expecting a stepwise progression for shortening it and then widening it. But the histology really revealed that it actually flipped 90 degrees — making it short and wide all at the same time.”
The authors suggest that these changes began with reorientation of growth plates around the time that our ancestors branched from the African apes, estimated to be between 5 million and 8 million years ago.
Another major change involved the timeline of bone formation.
Most bones form along a primary ossification center in the middle of the bone shaft.
In humans, however, the ilia do something quite different. Ossification begins in the rear of the sacrum and spreads radially. This mineralization remains restricted to the peripheral layer and ossification of the interior is delayed by 16 weeks compared to other primates — allowing the bone to maintain its shape as it grows and fundamentally changing the geometry.
“Embryonically, at 10 weeks you have a pelvis,” said Capellini as he sketched on a whiteboard. “It looks like this — basin-shaped.”
To identify the molecular forces that drove this shift, Senevirathne employed techniques such as single-cell multiomics and spatial transcriptomics. The team identified more than 300 genes at work, including three with outsized roles — SOX9 and PTH1R (controlling the growth plate shift), and RUNX2 (controlling the change in ossification).
The importance of these genes was underscored in diseases caused by their malfunction. For example, a mutation in SOX9 causes campomelic dysplasia, a disorder that results in hipbones that are abnormally narrow and lack lateral flaring.
Similarly, mutations in PTH1R cause abnormally narrow hipbones and other skeletal diseases.
They believe that the pelvis remained a hotspot of evolutionary change for millions of years.
As brains grew bigger, the pelvis came under another selective pressure known as the “obstetrical dilemma” — the tradeoff between a narrow pelvis (advantageous for efficient locomotion) and a wide one (facilitating the birth of big-brained babies).
They suggest that the delayed ossification probably occurred in the last 2 million years.
The oldest pelvis in the fossil record is the 4.4-million-year-old Ardipithecus from Ethiopia (a hybrid of an upright walker and tree climber with a grasping toe), and it shows hints of humanlike features in the pelvis.
The famous 3.2-million-year-old Lucy skeleton, also from Ethiopia, includes a pelvis that shows further development of bipedal traits such as flaring hip blades for bipedal muscles.
Capellini believes the new study should prompt scientists to rethink some basic assumptions about human evolution.
“All fossil hominids from that point on were growing the pelvis differently from any other primate that came before,” said Capellini. “Brain size increases that happen later should not be interpreted in a model of growth like that of chimpanzees and other primates. The model should be what happens in humans and hominins. The later growth of fetal head size occurred against the backdrop of a new way of making the pelvis.”
This research was funded in part by the National Institutes of Health.
For the first time, chemists at ETH Zurich have successfully used extremely short, rotating flashes of light to measure and manipulate the different movements of electrons in mirror-image molecules. They showed that chirality of molecules is not just a structural but also an electronic phenomenon.
Economist’s new tool looks at how China is more effective than U.S. in exerting political power through import, export controls
Christy DeSmith
Harvard Staff Writer
International trade can yield far more than imports and exports. According to David Y. Yang, Yvonne P. L. Lui Professor of Economics, trade can be used to wield political power.
Yang watched as China imposed trade restrictions on competitor Taiwan following a 2022 visit to the island by U.S. Speaker of the House Nancy Pelosi. A decade earlier, the arrest of a Chinese fishing boat captain in contested waters culminated with Beijing blocking exports to Japan of certain rare earth minerals, critical components for wind turbines and electric vehicles.
“Another example is China banning the import of Norwegian salmon for nearly a decade as punishment for awarding a Nobel Prize to the dissident Liu Xiaobo,” said Yang, a political economist with expertise in the East Asian superpower.
His latest working paper, co-authored with Princeton’s Ernest Liu, presents a framework for measuring how much geopolitical muscle a country can flex by threatening trade disruptions. Today, the economists find, China exerts outsized influence over trading partners while the United States has less power than expected relative to the size of its economy.
“With the arrival of new data sources and empirical tools, this is something we can now study very rigorously,” Yang emphasized. “Conducting these objective, data-driven analyses feels all the more urgent in today’s global geopolitical climate.”
Their model specifically tests a set of predictions made by mid-20th-century Harvard professor Albert O. Hirschman, a German-born Jew who fled Europe during World War II. His book “National Power and the Structure of Foreign Trade” (1945) offered a theoretical account of how countries might use trade to assert geopolitical dominance.
“Hirschman viewed the issue through a positive lens,” Yang noted. “Rather than bombing each other, countries could just fight economic wars to achieve the same goals.”
Hirschman saw that trade asymmetries could be exploited. But deficits and surpluses weren’t the only relevant variables. Also important was how crucial and easily replaced the goods in question were. Halting the flow of crude oil tends to pack a far bigger punch than withholding textile exports.
“If one country becomes overly reliant on another, it might be economically efficient,” Yang explained. “But it can leave the first country vulnerable by exposing it to unfavorable power dynamics.”
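Hirschman's point about asymmetric dependence can be made concrete with a toy calculation. Everything here (the countries, goods, numbers, and the `leverage` function) is invented for illustration and is not Liu and Yang's model: weight each traded good by how hard it is to replace, then measure the disrupted trade relative to each side's economy.

```python
# Toy sketch of Hirschman-style trade leverage. A disruption severs
# the whole bilateral relationship, but it hurts each side in
# proportion to how much irreplaceable trade it loses relative to
# the size of its economy.

trade = {  # (exporter, importer, good): value in $bn (made up)
    ("A", "B", "oil"): 50.0,
    ("A", "B", "textiles"): 30.0,
    ("B", "A", "electronics"): 40.0,
}
# 1.0 = no alternative supplier exists; 0.0 = freely replaceable.
irreplaceability = {"oil": 0.9, "textiles": 0.2, "electronics": 0.5}
gdp = {"A": 1000.0, "B": 400.0}  # $bn, made up

def leverage(src, dst):
    """Irreplaceability-weighted bilateral trade, relative to dst's GDP."""
    total = 0.0
    for (exp, imp, good), value in trade.items():
        if {exp, imp} == {src, dst}:  # flows in either direction
            total += value * irreplaceability[good]
    return total / gdp[dst]

print("A's leverage over B:", leverage("A", "B"))
print("B's leverage over A:", leverage("B", "A"))
```

With these made-up numbers the smaller economy B depends far more on the relationship than A does, so A holds the leverage over B, echoing Yang's later observation that medium-sized countries are the ones that get bullied.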
Hirschman’s ideas seemed less relevant in the post-war years, with the widespread desire for increased free trade. But the book feels fresh again today, said Yang, who recently assigned it in an undergraduate economics course.
“I asked students to read the first few chapters and guess when it was written,” he recalled. “Many guessed it was last year.”
Yang and Liu set about formalizing Hirschman’s vision about three years ago, long before the current suite of aggressive U.S. tariffs. “A lot of the anecdotal examples that motivated our work came from China,” Yang said.
Indeed, their model shows China’s trade power rising over the past two decades as it turned key industries into political instruments. Chemical products, medical instruments, and electrical equipment emerged as especially potent. The country’s trade power proved larger than expected given the size of its GDP, which trails the world’s largest economy by many trillions of dollars.
U.S. trading power over China declines
This figure plots the directed power (in all sectors) between the U.S. and a country for each year.
Credit: Ernest Liu and David Y. Yang
“In the early 2000s, the U.S. was able to exert more absolute power over China through trade disruptions,” said Yang, noting that findings on the U.S. were relatively stable over the 20-year period they studied.
“But things have quickly flipped,” he continued. “China now has more trade power over the U.S. and, at the moment, can exert positive power over any other entity in the world.”
China’s trading power on the rise
This figure plots the directed power (in all sectors) between China and a country for each year.
Credit: Ernest Liu and David Y. Yang
Yang and Liu also tested a pair of predictions concerning the consequences of unbalanced power. First, the economists tapped a database of millions of events involving the governments of two trading partners, confirming that negotiations and other forms of engagement increase with the asymmetries Hirschman described.
Another dataset, sourced from international opinion polls, was used to gauge bilateral geopolitical alignment over time and to verify a second predicted consequence. Yang and Liu found national leaders strategizing to build and bank trade power — by limiting imports, for example — when relations with a trading partner turned frosty due to political turnover.
“While many of the examples we give in the paper are from China, we hope to show this is a more general phenomenon,” Yang said. “Trade is a source of power any country can access.”
The paper is threaded with other insights.
“If the European Union acted as one country, it would actually be able to exercise positive power over China,” Yang said. “But individual EU members all have negative power over China. I don’t think it’s a coincidence that China typically engages with EU members bilaterally.”
What’s more, the U.S. and China are weaker against each other. The paper features a pair of maps illustrating their trade power over the rest of the world from 2001 to 2021. U.S. strength appears to peak in North America, while China’s is anchored in the Asia Pacific region.
“In terms of global power dynamics,” Yang observed, “medium-sized countries are very much the ones that get bullied.”
The results underscore a recent shift in the global trade order. For half a century following World War II, Yang said, the largest economies imported and exported with hopes of maximizing efficiency for the benefit of domestic businesses and consumers.
“What’s worrisome is that we’re starting to see the opposite,” he offered. “Trade is being restructured to take power into consideration. But in contrast with the positive-sum nature of efficiency-enhancing trade as countries produce according to their comparative advantage, power consideration in trade is negative-sum, hurting welfare on both sides.
“As we begin to painfully realize,” Yang added, “it may not be geopolitically feasible to implement efficient trade.”
Analysts highlight a school-sized gap in mental health screening
Alvin Powell
Harvard Staff Writer
Hao Yu.
Stephanie Mitchell/Harvard Staff Photographer
As anxiety and depression persist at alarming rates among U.S. teens, less than a third of the nation’s public schools conduct mental health screenings, and a significant number of those that do say it’s hard to meet students’ needs, according to a new survey of principals.
With staffing that includes counselors and nurses, public schools are uniquely positioned to help address the youth mental health crisis declared in 2021 by the U.S. surgeon general, according to Harvard Medical School’s Hao Yu, a co-author of the study.
“Child mental health is a severe public health issue in this country,” he said. “Even before COVID, about a quarter of children had different degrees of mental health problems, and during the pandemic the problem just got worse.”
The study, published last month in JAMA Network Open, is the first since 2016 to poll public school principals on children’s mental health, said Yu, an associate professor of population medicine. The intervening years have included COVID-related disruptions, growing worries about screen time, and a surge of artificial intelligence in everyday life, he noted.
$1B: Cut from previously approved federal funding for school mental health support
One positive finding from the survey, which was funded with a grant from the National Institute of Mental Health, is that the percentage of U.S. public schools that screen for mental health issues has risen significantly in the past nine years, albeit from just 13 percent to 30.5 percent. The survey asked 1,019 principals three questions: Do you screen for student mental health issues? What steps are taken for students identified with anxiety or depression, two of the most common youth mental health issues? And how easy or hard is it to find adequate mental health care for students who need it?
The responses show that the most common step taken for students struggling with anxiety or depression is to notify parents — almost 80 percent of schools did that. Seventy-two percent offer in-person treatment, while about half refer to an outside mental health provider. Less than 20 percent offer telehealth treatment.
Responses to the final question highlight the challenge facing those seeking to address the problem, with 41 percent describing the task of getting care as “hard” or “very hard,” a result that Yu said, while concerning, isn’t surprising given the nationwide shortage of mental health providers.
The survey, conducted with colleagues from the Medical School, the nonpartisan research organization RAND, Brigham and Women’s Hospital, the University of Pittsburgh, the Harvard Pilgrim Health Care Institute, and Brown University, also showed that school-based screening programs are concentrated in larger schools, with 450 students or more, and in districts with larger populations of racial and ethnic minority students.
Helping young people overcome mental health challenges is a multistep process, Yu said.
“We need to make child psychiatry an attractive profession and we need to train more mid-level providers — social workers, school nurses, and counselors — because those middle-level providers play an important gatekeeper role, helping identify children with mental health problems and helping children and their families get into the healthcare system,” he said.
It’s also important, Yu said, to get policy right at all levels of government. For example, he said, even though it’s clear that meeting the challenge will require more resources, the federal government recently slashed $1 billion in previously approved school mental health funding. A potentially positive development, he said, is the nationwide trend toward restrictions on smartphone use.
“I don’t think any other institution can replace the schools in identifying and treating child mental health problems,” Yu said. “If mental health problems are treated, their severity can be greatly reduced. Mental health problems not treated in childhood can have a long-lasting effect into adulthood. That’s not an optimal situation for our society.”
The HIL building on the Hönggerberg campus is set to become a living lab. Now in need of renovation, the building will be remodelled and extended, with completion pencilled in for 2035. Professorships at ETH Zurich will engage with the project directly to research techniques and designs. Their aim is to advance sustainable redevelopment and retrofitting methods.
Following a one-year hiatus, NUS Team Bumblebee made a strong return to RoboSub 2025, reclaiming the championship title. The team had previously clinched its first championship in 2022 and successfully retained the top place in 2023.
Held from 11 to 17 August 2025 at the William Woollett Jr Aquatics Center in the City of Irvine in California, USA, RoboSub is a global competition that challenges teams to tackle real-world underwater robotics problems – from oceanographic exploration and mapping to object detection and manipulation. This year’s event brought together 58 teams from across the globe.
Team Bumblebee took a break from last year’s edition of RoboSub to focus on the Maritime RobotX Challenge, another international competition that advances autonomous robotic systems in the maritime domain. This year, the NUS team returned to RoboSub 2025 amid a record number of contenders, including Duke University, San Diego State University and Arizona State University – all of which were also finalists in this year’s competition.
After a week of intense competition – featuring technical presentations to judges and multiple rounds of pool testing – Team Bumblebee emerged as the overall champion and also swept awards for design documentation, namely Top Website, Top Video, Top Report and Top Assessment.
Team Lead Leong Deng Jun, Year 4 Computer Engineering undergraduate, reflected on the team’s effort, “I am very proud of what the team has achieved at this RoboSub. This year’s competition was especially challenging with a record number of teams participating. Many teams came better prepared, which meant we had limited testing time in the competition arena. Despite some unexpected setbacks and hardware issues, every member of our team stepped up and contributed to this victory.”
The NUS Business School marked its 60th anniversary with a birthday bash on 14 August 2025, bringing together current students, staff and past and present leaders to celebrate its journey and look to the future.
Founded in 1965 as the Department of Business Administration, it began with just 21 students recruited from the Faculty of Arts by its first leader, Dr Andrew Zecha, who personally persuaded undergraduates to join the fledgling department. It went on to lay the foundation for business education in Singapore, setting up Master of Business Administration (MBA) and Executive MBA (EMBA) programmes in partnership with schools in China and the US and earning accreditations that helped it transform into a global business school.
Today, the Business School boasts more than 6,000 students across the undergraduate and postgraduate levels and an alumni network of more than 50,000. It is the top business school in Asia and ranks among the top ten globally.
NUS President Tan Eng Chye, NUS Business School co-founder Mr Tan Yam Pin, former deans Emeritus Professor Lee Soo Ann, Professor Wee Chow Hou, Professor Hum Sin Hoon and Professor Kulwant Singh, and Deputy Dean Associate Professor Jumana Zahalka joined Distinguished Professor Andrew Rose, Dean of the Business School, for a cake-cutting ceremony to open the event.
In his opening remarks, Prof Rose thanked the School’s founders and leaders for their contributions over the past six decades, as well as the faculty and staff whose education and research work form the core of the School’s reputation. “They are the reason why we’re attracting higher and higher quality students, and our reputation continues to grow,” he said.
The event included a fully subscribed masterclass on the future of sustainability in Asia, delivered by Professor Lawrence Loh, Director of the Centre for Governance and Sustainability, and an activity-packed carnival. Attendees enjoyed traditional snacks, arcade games, art booths and a dunk tank where their attempts to dunk Prof Rose and other faculty members raised money for a good cause.
Leading in Singapore and Asia
For former Business School leaders Mr Tan, Prof Lee and Prof Wee, the 60th anniversary milestone was an opportunity to reflect on the stark changes that have taken place since their time. The School’s enrolment growth is especially impressive for Mr Tan, who was there when the initial cohort of 21 was enrolled. In comparison, the undergraduate intake in recent years has been around 4,000 students per year.
Mr Tan reflected: “In 1963, when we first conceived the concept of starting a department of business administration, it seemed like a baby step forward to me. Now looking back, it was actually a giant leap, like (Neil) Armstrong said.”
Back then, business subjects like accountancy were considered more suitable for polytechnic diplomas, rather than degrees, Prof Lee recalled. However, things changed when Singapore gained independence in 1965 and needed to train its own finance professionals.
Prof Lee said, “The teaching of accounting and business administration became essential for businesses to survive, because the accountants had returned to England. By teaching accounting here, we upgraded the capabilities of Singapore and the economy.”
He looks forward to continued innovations in how the School prepares students for the changing business environment, such as by offering more double degree programmes and leveraging new technologies like Generative AI to help students learn faster and more broadly.
Prof Wee was pleased to see that several initiatives he introduced during his tenure from 1990–1999 have become an integral part of the School’s offerings and shaped its global focus.
For instance, he was an early advocate of the modular system over the traditional year-long curriculum, as it would allow students to retake individual modules as needed and embark on exchange programmes more flexibly. The same system has now been implemented across the entire University.
To strengthen the School’s influence as an authority on Asian business, he encouraged faculty members to initiate collaborations with authors of well-known business textbooks to develop new editions with Asian case studies and contexts. In addition, Chinese MBA and EMBA programmes were launched under his leadership, offering the first such programmes outside mainland China.
A project that he would have liked to execute before he stepped down was the creation of a bilingual master’s programme, which he believes would be critical to the Business School’s mission of providing business education with an Asian lens. He still hopes that this vision will eventually come to fruition in the School’s next chapter.
“Singapore as a nation has survived because the West wants to listen to our views about China, and the Chinese come to us to know about the West,” Prof Wee said. “As a society, our strength lies in playing that bridging role. It’s important that the government is trying to cultivate bilingualism, and our universities should complement this.”
Yard brims with voices and motion, excitement and nerves, sweat and tears on move-in day
Ryan Zhou was busy moving items into his Weld Hall dorm room on Tuesday with the help of his parents and his new suitemates, Kelvin Cheung and Ronan Pell, when there was a knock on the door.
“Hi, Ryan, my name’s Hopi,” said Hopi Hoekstra, the Edgerley Family Dean of the FAS, coming into the room with some bags she had helped Zhou’s brother carry up from the car downstairs. “Welcome, we’re so happy to have you here.”
“I’m excited,” said Zhou, as he stood in the suite’s common area piled high with duffels, boxes, and bedding. “I’m excited to get started with meeting new people, making new friends, excited for all the professors and the classes.”
Harvard Yard came alive Tuesday morning as first-year students and their families unloaded cars and carried bags and boxes to the dorms in preparation for the start of their time at Harvard.
Dean Hopi Hoekstra chats with first-years Ronan Pell (left) and Kelvin Cheung as they settle in their new home in Weld Hall.
Veasey Conway/Harvard Staff Photographer
Zhou and his family drove up from their home in Ellicott City, Maryland, a few days beforehand. His father, Ning Zhou, said he’s feeling positive about the road ahead.
“I am just extremely proud of him and his years of effort,” he said. “This is his dream school. A lot of Harvard graduates told him the experience was transformative for them, so I hope that he will have a similar experience.”
“I just feel happy for him,” Zhou’s mother, Jun Gui, added. “He found the place he wants to go. I haven’t shed a tear yet.”
Welcoming the new students were President Alan Garber (second from left), joined by his wife, Anne Yahanda (far left); Faculty of Arts and Sciences Dean Hopi Hoekstra; and Dean of Harvard College David Deming.
Stephanie Mitchell/Harvard Staff Photographer
First-year Cate Frerichs with her mother, Desiree Luccio.
Stephanie Mitchell/Harvard Staff Photographer
First-year Jose Garcia helps hoist a box up a Hollis Hall stairway.
Veasey Conway/Harvard Staff Photographer
Senior Lexi Triantis takes a bubble break.
Veasey Conway/Harvard Staff Photographer
Boxes collect in a staging area outside the Science Center.
Veasey Conway/Harvard Staff Photographer
A T-shirt decorated with emblems of Harvard’s first-year Houses.
Stephanie Mitchell/Harvard Staff Photographer
By Johnston Gate, a group of upper-level students from the Crimson Key Society, holding a “Welcome to Harvard” sign, sang and danced along to Nicki Minaj and Bruno Mars songs, waving to the cars that pulled in. Outside each dorm, upper-level Peer Advising Fellows, dressed in red T-shirts, greeted new students and helped show them to their rooms.
“What makes move-in day so special?” Hoekstra said. “Three things: Experiencing the energy that our returning students bring to welcoming new first-years to the Harvard community. Meeting proud, and sometimes nervous, parents who have traveled from around the globe. Watching new friendships form among roommates meeting for the first time — ones that often not only last for four years at Harvard but across lifetimes.”
“A lot of Harvard graduates told him the experience was transformative for them, so I hope that he will have a similar experience.”
Ning Zhou, about son Ryan
Leila Holland and her parents, Keisha and Jaime Holland, from Long Beach, California, took it all in as they paused outside the key distribution tent in the center of the green. Leila, who had just picked up her ID and register book, said she was looking forward to seeing her Hollis Hall room.
“I’m a little nervous, but I’m really excited to be part of a new community,” she said.
Jaime Holland said he knows this will be a time of changes.
“Just the discovery process, as she figures out what she wants to do and the kind of person she wants to be,” he said. “This is a great place to do it.”
Veasey Conway/Harvard Staff Photographer
David Deming, Danoff Dean of Harvard College, made his way between the parked cars, cheerfully accepting a black rolling suitcase and a pink wall sign from a family’s car, and leading the way to Weld.
“Move-in day is one of my very favorite days of the year at Harvard,” Deming said. “There is so much positive energy and excitement and anticipation. I feel that, too, in my first year as dean. It’s great to be able to help new students move in and feel the positive energy with them.”
Outside Grays Hall, Harvard President Alan Garber and his wife, Anne Yahanda, chatted with parents, swapping stories and recalling what it felt like to drop their own children at college.
“For everyone here, all the hard work, everything they’ve done — it’s just such an accomplishment and dream.”
Desiree Luccio
For most parents, move-in day prompts complicated emotions.
Desiree Luccio couldn’t help tearing up as she spoke about moving her daughter, Cate Frerichs, into Wigglesworth Hall. The two wore matching red Harvard sweatshirts.
“I didn’t cry at graduation, but now it’s hitting me,” Luccio said. “For everyone here, all the hard work, everything they’ve done — it’s just such an accomplishment and dream.”
For her part, Frerichs was particularly looking forward to being a student athlete — she will be a coxswain on the men’s heavyweight rowing team.
“I guess I’m nervous and excited,” Frerichs said. “I’ve met my roommates, and I’m excited to start living with them and to meet everyone.”
The grants will support new editions of ancient Babylonian literature, workshops for educators on digital methods and resources, and the definitive scholarly edition of Jefferson’s correspondence and papers.
Global concerns rising about erosion of academic freedom
New paper suggests threats are more widespread, less obvious than some might think
Christina Pazzanese
Harvard Staff Writer
Political and social changes in the U.S. and other Western democracies in the 21st century have triggered growing concerns about possible erosion of academic freedom.
In the past, colleges and universities largely decided whom to admit and hire, what to teach, and which research to support. Increasingly, those prerogatives are being challenged.
In a new working paper, Pippa Norris, the Paul F. McGuire Lecturer in Comparative Politics at Harvard Kennedy School, looked at academic freedom and found it faces two very different but dangerous threats. In this edited conversation, Norris discusses the lasting effects these threats can have on institutions and scholars.
How is academic freedom defined here and how is it being weakened?
Traditional claims of academic freedom suggest that as a profession requiring specialist skills and training like lawyers or physicians, universities and colleges should be run collectively as self-governing bodies.
Thus, on the basis of their knowledge and expertise in their discipline and subfield, scholars should decide which colleagues to hire and promote, what should be taught in the classroom curriculum, which students should be selected and how they should be assessed, and what research should be funded and published.
Constraints on this process from outside authorities, no matter how well-meaning, can be regarded as problematic for the pursuit of knowledge.
Encroachments on academic freedom can arise for many different reasons. For example, the criteria used for state funding of public institutions of higher education commonly prioritize certain types of research programs over others. Personnel policies, determined by laws, set limits on hiring and firing practices in any organization. Donors also prioritize support for certain initiatives. Academic disciplines favor particular methodological techniques and analytical approaches. And so on.
Therefore, even in the most liberal societies, academic institutions and individual scholars are never totally autonomous, especially if colleges are publicly funded.
Nevertheless, the classical argument is that a large part of university and college decision-making processes, and how they work, should ideally be internally determined, by processes of scholarly peer review, not externally controlled, by educational authorities in government.
You say academic freedom faces threats on two fronts, external and internal. Can you explain?
Much of the human rights community has been concerned primarily about external threats to academic freedom. Hence, international agencies like UNESCO, Amnesty International, and Scholars at Risk, and domestic organizations like the American Association of University Professors, are always critical of government constraints on higher education like limits to free speech and the persecution of academic dissidents, particularly in the most repressive authoritarian societies.
In America, much recent concern has focused on states such as Florida and Texas, and the way in which lawmakers have intervened in appointments to the board of governors or changed the curriculum through legislation.
But, in fact, the government has always played a role, even in private universities. Think about sex discrimination, think about Title IX, think about all the ways in which we’ve legislated to try to improve, for example, diversity. That wasn’t accidental. That was a liberal attempt to try to make universities more inclusive and have a wider range of people coming in through social mobility.
So, we can’t think this all just happened because of Trump. It hasn’t. It’s a much larger process, and it’s not simply America. In all democracies, official bodies in the federal or state government, whichever party is in power, generally regulate employment conditions, university accreditation, curriculum standards, student grants and loans, and so on and so forth, and so it’s going to do that for colleges and universities in the U.S., as well.
Academic freedom is also at risk from internal processes within higher education, especially informal norms and values embedded in academic culture. Those can exist in any organization.
In academic life, surveys of academics since the 1950s have commonly documented a general liberal bias (broadly defined) amongst the majority of scholars, where the proportion of conservatives has usually been a heterodox minority.
This bias comes from a variety of different sources: It’s partly self-selection, a matter of who chooses to go into academic life versus private sector careers. But it is also internally reinforced — a matter of who gets selected, appointed, promoted, and who gets research grants and publications. There are lots of different ways people have to conform to the social norms of the workplace and within their discipline.
Those cultural norms are tacit. The problem is that if you don’t follow the norms, there may be a financial penalty — you don’t get promoted, or you don’t get that extra step in your grant and your award.
But they may also be just informal pressures of collegiality, friendship, and social networks. People don’t want to offend, so they seek to fit in with their colleagues, department, or institution. As a result, heterodox minorities may well decide to “self-censor,” to decline to speak up in dissent from the prevailing community.
The result is to accentuate the liberal bias, since criticisms of prevailing orthodoxies are not even expressed or heard in debate. Thus, many holding orthodox views shared by the majority in departmental meetings, appointment boards, or classroom seminars may believe that the discussion is open to all viewpoints, but silence should not be taken as tacit agreement if minority dissidents silently feel unable to speak up.
The mere perception that academic freedom is in decline increases people’s tendency to self-censor, according to the paper. Why is that?
Liberals often feel that there is no self-censorship, and there is no problem in academe, that everybody is free to speak their opinion, and that they welcome diversity in the classroom, they welcome diversity in the department, and things like that.
The problem is that if you’re in a minority, and in particular right now the conservative minority, then you feel you can’t immediately speak up on a number of issues, because doing so might offend your colleagues or might create material problems for your career.
If you’re a student and you have a heterodox view, you might feel that you won’t be popular, you won’t be invited to the parties, and you won’t have all those social networks which are a really important part of why people go to college. So, there’s this informal penalty.
Liberals don’t sense it because, when they are discussing things, they think a variety of different views are represented, when in fact some views may well be antithetical to theirs. They don’t even hear the criticisms of their views because those who are in the minority don’t want to speak up.
The minority can be defined in lots of different ways. It’s not simply one ideology. There are multiple viewpoints in any subject discipline. But there’s a particular way of looking at these things within a discipline, which sets the agenda, which also affects textbooks and affects the classroom, and, in fact, affects the informal culture.
You found that endorsements of strong pro-academic freedom values predict the willingness of scholars to speak out even when it differs from popular opinion. What did you mean?
Think about the people who are standing up for Harvard right now or standing up for any institution or any other unpopular view. A strong liberal is somebody who follows the John Stuart Mill argument, which is that the only way you know your argument is to know the opponent’s and to be able to act like a prosecutor in which you can put the argument on both sides. I try to use this as a pedagogy in my own classes.
People who believe in academic freedom are largely in the more liberal democracies, the Western democracies of the world. In many countries, they don’t have those luxuries.
In China, you’re not going to be speaking up against the Communist Party. It’s about what can you say and when can you say it — being sensitive to the silences and what generates the silence. And how do you ask a question, which is not going to belittle somebody and is not going to make them feel small, but you’re taking them seriously when you don’t agree with them.
The most important finding from my research evidence is that if you’re working and living in a country with more institutional constraints and less legal freedom, you’re also more likely to suppress your own views.
You can think of it as an embedded model like a Russian nesting doll. The internal group is limiting your willingness to speak up; the external is about the punishments you face if you do speak up. The two interact, obviously, but the informal norms are the subtlest things, which will keep you quiet.
Crime and public safety are among the most pressing concerns across communities in the United States. Violence fractures lives and carries staggering costs; the economic burden of gun violence alone tops $100 billion each year. More than 5 million people live under supervision through incarceration, probation, or parole, while countless more experience the collateral consequences of arrests and criminal charges. Achieving lasting public safety requires confronting both crime itself and the collateral consequences of the U.S. criminal justice system.
To help meet these dual challenges, J-PAL North America — a regional office of MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL) — with generous grant support from Arnold Ventures, launched the Initiative for Effective US Crime Policy (IECP). This initiative will generate rigorous evidence on strategies to make communities safer, reduce discrimination, and improve outcomes at every stage of the criminal justice process.
“There are a lot of open questions. We desperately need to be trying new solutions, but we need to try them in a way that enables us to learn whether they work,” notes Jennifer Doleac, executive vice president of criminal justice at Arnold Ventures. “There is a path forward for us to step up and make a concerted effort to make sure that we are being very strategic in how we spend our time and where we are directing our resources.”
Building on more than a decade of pioneering randomized evaluations, J-PAL North America’s IECP will fund rigorous new studies in the criminal justice space, offer hands-on technical assistance, and connect researchers with practitioners. By reviewing both established and emerging evidence, the initiative will also help decision-makers focus resources on interventions that demonstrably improve public safety.
“Through this initiative, we aim to expand the use of rigorous existing evidence and help scale interventions that are proven to improve outcomes, from prevention to reintegration,” says Sara Heller, associate professor of economics at the University of Michigan and co-chair of IECP. “At the same time, IECP seeks to fill critical gaps in the evidence base by supporting new research on what works to improve the criminal justice system in the United States.”
A platform for collaboration
In June at the MIT Museum, IECP convened over 70 researchers, policymakers, and practitioners to identify research priorities and catalyze collaboration. Speakers explored the structural drivers of violence, effective pathways for translating evidence into policy, and strategies for establishing successful partnerships between researchers and practitioners.
Speakers also reflected on the value and limits of existing evidence and discussed areas in which randomized evaluations can help address the most pressing questions. Randomized evaluations have contributed powerful insights in areas such as summer youth employment programs, reminders to increase court appearances, hot-spot policing, and the use of body-worn cameras. Yet many important questions remain unanswered.
“We know randomized evaluations can answer hard policy questions, but only if we ask the right questions, with the right lens, at the right scale,” says Amanda Agan, associate professor at Cornell University and co-chair of IECP. “This convening was a call to push further: to design studies that are not only rigorous, but also relevant to the lived experiences of communities and the structural forces that shape public safety.”
How to take part
Are you a practitioner with a promising idea in the criminal justice space, a policymaker planning a new program, a researcher developing a real-world intervention, or a funder investing in rigorous empirical evidence? IECP supports research partnerships to advance scalable, evidence-based solutions in the criminal legal system by funding impact evaluations, connecting researchers and practitioners, and supporting the design of randomized evaluations and the dissemination of evidence.
The new Initiative for Effective US Crime Policy recently convened over 70 researchers, policymakers, and practitioners to identify research priorities and catalyze collaboration.
Will Burke (from left) with Zach Galifianakis and Jimmy Kimmel.
Randy Holmes/ABC
Anna Lamb
Harvard Staff Writer
‘Jimmy Kimmel Live!’ writer Will Burke on taking risks in comedy and why getting laughs is worth near-constant rejection
A series exploring how risk shapes our decisions.
Imagine walking a tightrope. Your goal is to get to the other side without falling. Below you — certain death. Well, maybe not death. Maybe there’s a net to catch you, but it’s not a very soft net, and falling into it will certainly not feel good. That, says Will Burke, alumnus of Harvard College and nearly two-decade veteran staff writer, now director, for “Jimmy Kimmel Live!,” is what trying to be funny is like.
“The second you walk out on stage or you start to tell a joke, you’re walking a tightrope,” Burke said. “You’re betting on your timing, your point of view, and sometimes you’re putting your dignity on the line in the hopes that people will laugh.”
Making people laugh, both on stage and off, has been a lifelong pursuit for Burke ’99. His comedy career began with class-clown antics in the hallways of the New England prep schools where his father taught, continued on stage at Harvard with the improv group On Thin Ice and the Shakespeare troupe he helped found, and then blossomed in Los Angeles, where he practiced with improv groups like The Groundlings and auditioned for acting gigs.
And while a career spent trying to be funny sounds like a dream for many, Burke said it’s actually been quite risky. There’s the risk of putting yourself out there creatively, the risk of crossing a line with a joke, and then, of course, the risk of not “making it” as a funny guy full-time.
“The biggest risk was taking my Harvard diploma in one hand and trading the ivory towers of Harvard for the dive bars of Hollywood,” Burke said. “I was turning my back on the pedigree and the connections.”
Burke knows a Harvard degree can get you far. But, he said, when he moved to Los Angeles after graduation in 1999, he also knew it wouldn’t get him on TV. He’d have to do the same open mics, auditions, and acting classes the rest of the aspiring comedians in LA were doing. And in the meantime, he’d be a bartender slash tutor slash cater-waiter slash comedian.
“I suppose in some ways, you could say for a Harvard grad it’s less risky to go try to do this thing, because if it doesn’t work out you’ve still got a Harvard diploma, and some doors will open to you in a different field. But once you’re 10 years in, 15 years in, starting over in a totally different career is risky too,” he said.
And 10 years, Burke said, would be all he gave it before accepting defeat and going back to the East Coast.
“As an actor, it took me, like, 150 auditions before I booked my first thing,” Burke said. “And at this point I had become a little jaded. I was like, ‘This is so annoying. I don’t even want this commercial. This is a terrible Taco Bell ad, who cares?’ And when you don’t care, then they’re like, ‘Oh, that guy’s great. He doesn’t care. He doesn’t need this job.’ They feel it. And so that taught me a lot.”
After six years of auditioning and rejection, with only some commercials and small TV roles to show for it, Burke was offered a job back in Boston, working for a bank. He had a baby on the way, rising rent, and an income stitched together from various odd jobs.
“I essentially, verbally accepted a job — I went down to HR and they photocopied my driver’s license and gave me the 401K package, what it would look like, and that whole thing. And I was like, ‘This feels like the most responsible thing to do. I have mouths to feed.’ And I could still scratch the itch in comedy clubs in Boston on the weekends, if I wanted. I kept trying to give myself a pep talk that I felt good about this — having a steady paycheck and a guaranteed career.”
Fate, said Burke, had other plans.
“Shortly thereafter, I flew back to LA and I got offered a job writing for ‘Jimmy Kimmel Live!’ And thank God I did. That was 19 years ago, and I’ve been there ever since.”
Since landing “Kimmel,” Burke said every day on the job, trying to be funny, is a risk.
“There were stressful days where I was convinced I was getting fired,” he said. “You’d see other writers get fired. I was like, ‘Oh, he’s not pitching stuff. Jimmy doesn’t like his stuff or her stuff,’ and then the next thing you know, that guy’s desk is empty. That’s real-world risk. There’s a lot of pressure to continue to produce stuff that lands and you’re trying to hit this moving target — the stuff that was making Jimmy laugh last week, he’s over it. Now that’s played out. Humor is like that.”
Asked how he deals with near-constant rejection in the office, Burke said your feelings are always on the line.
“It’s impossible to not take things personally,” he said. But he added, there’s a trick to avoid getting too hurt.
“You walk into the room convinced that you are the absolute only person who could ever play this role, and you do your audition, and as soon as they say, ‘Thank you so much,’ you walk out of that room convinced you will never hear from them again and that you didn’t get it, so that you’re not disappointed. And it’s this weird game you play with yourself. Extrapolating that to the writers’ room as you’re pitching a joke, you stop caring what people think, because your nerve endings get frayed.”
In his personal life, Burke says his approach to humor errs on the risky side.
“Comedy can disarm tension. It can bridge divides. It can humanize a room, especially when you’re an underdog or an outsider,” he said. “Sometimes telling a dirty joke at a fancy dinner party is like, ‘Oh, we’re going there. Everyone loves a dirty joke, and now we’re all sharing dirty jokes, and it’s OK. This is an R-rated dinner.’”
But of course, there’s always the risk of the joke going too far. In a fictionalized scenario that definitely wasn’t him, he lays out the rule of time and place.
“Sometimes, in doing a joke, it goes too far, and you learn from it, but you have to go too far sometimes to know where the line is,” he said. “I know you thought it was super funny to come downstairs wearing a bra on your head at the party, but we’re at my friend’s house, and that’s his girlfriend’s bra, and you don’t know them.”
But overall, the chance of being funny, Burke said, well outweighs the risks of being embarrassed or falling off the tightrope.
“It’s a dream job,” he said. “It’s what I envisioned doing when I was a little kid, and I’d see ‘Saturday Night Live,’ or even ‘The Muppet Show.’ The idea of, there’s a show going on, and there’s insanity backstage, and there’s a Stormtrooper and free chickens and Gonzo and things are crashing and the show must go on.”
Environmental scientists are increasingly using enormous artificial intelligence models to make predictions about changes in weather and climate, but a new study by MIT researchers shows that bigger models are not always better.
The team demonstrates that, in certain climate scenarios, much simpler, physics-based models can generate more accurate predictions than state-of-the-art deep-learning models.
Their analysis also reveals that a benchmarking technique commonly used to evaluate machine-learning techniques for climate predictions can be distorted by natural variations in the data, like fluctuations in weather patterns. This could lead someone to believe a deep-learning model makes more accurate predictions when that is not the case.
The researchers developed a more robust way of evaluating these techniques, which shows that, while simple models are more accurate when estimating regional surface temperatures, deep-learning approaches can be the best choice for estimating local rainfall.
They used these results to enhance a simulation tool known as a climate emulator, which can rapidly simulate the effect of human activities on the future climate.
The researchers see their work as a “cautionary tale” about the risk of deploying large AI models for climate science. While deep-learning models have shown incredible success in domains such as natural language, climate science contains a proven set of physical laws and approximations, and the challenge becomes how to incorporate those into AI models.
“We are trying to develop models that are going to be useful and relevant for the kinds of things that decision-makers need going forward when making climate policy choices. While it might be attractive to use the latest, big-picture machine-learning model on a climate problem, what this study shows is that stepping back and really thinking about the problem fundamentals is important and useful,” says study senior author Noelle Selin, a professor in the MIT Institute for Data, Systems, and Society (IDSS) and the Department of Earth, Atmospheric and Planetary Sciences (EAPS), and director of the Center for Sustainability Science and Strategy.
Selin’s co-authors are lead author Björn Lütjens, a former EAPS postdoc who is now a research scientist at IBM Research; senior author Raffaele Ferrari, the Cecil and Ida Green Professor of Oceanography in EAPS and co-director of the Lorenz Center; and Duncan Watson-Parris, assistant professor at the University of California at San Diego. Selin and Ferrari are also co-principal investigators of the Bringing Computation to the Climate Challenge project, out of which this research emerged. The paper appears today in the Journal of Advances in Modeling Earth Systems.
Comparing emulators
Because the Earth’s climate is so complex, running a state-of-the-art climate model to predict how pollution levels will impact environmental factors like temperature can take weeks on the world’s most powerful supercomputers.
Scientists often create climate emulators, simpler approximations of a state-of-the-art climate model, which are faster and more accessible. A policymaker could use a climate emulator to see how alternative assumptions on greenhouse gas emissions would affect future temperatures, helping them develop regulations.
But an emulator isn’t very useful if it makes inaccurate predictions about the local impacts of climate change. While deep learning has become increasingly popular for emulation, few studies have explored whether these models perform better than tried-and-true approaches.
The MIT researchers performed such a study. They compared a traditional technique called linear pattern scaling (LPS) with a deep-learning model using a common benchmark dataset for evaluating climate emulators.
Their results showed that LPS outperformed deep-learning models in predicting nearly all parameters they tested, including temperature and precipitation.
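At its core, linear pattern scaling is a per-location linear regression of a local climate variable against global-mean temperature. The sketch below illustrates the idea on synthetic data; the array shapes, variable names, and numbers are invented for illustration and are not taken from the study.

```python
import numpy as np

# Illustrative linear pattern scaling (LPS): fit one least-squares line per
# grid cell relating local temperature to global-mean temperature, then use
# the fitted line to emulate local change at a new global warming level.
# All data here are synthetic; shapes and names are hypothetical.

rng = np.random.default_rng(0)
n_years, n_cells = 50, 4

global_mean_T = np.linspace(0.0, 2.0, n_years)       # global-mean warming (°C)
true_slopes = np.array([0.8, 1.2, 1.5, 0.9])         # per-cell sensitivity
local_T = (global_mean_T[:, None] * true_slopes
           + rng.normal(0.0, 0.05, (n_years, n_cells)))  # noisy local temps

# Solve local_T ≈ slope * global_mean_T + intercept for every cell at once.
X = np.column_stack([global_mean_T, np.ones(n_years)])   # (n_years, 2)
coeffs, *_ = np.linalg.lstsq(X, local_T, rcond=None)     # (2, n_cells)
slopes, intercepts = coeffs

# Emulate local temperatures at a hypothetical +3 °C of global-mean warming.
predicted_local = 3.0 * slopes + intercepts
```

Because the emulator reduces to a handful of fitted coefficients, it runs in microseconds once trained, which is what makes pattern scaling attractive as a baseline against far costlier deep-learning emulators.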
“Large AI methods are very appealing to scientists, but they rarely solve a completely new problem, so implementing an existing solution first is necessary to find out whether the complex machine-learning approach actually improves upon it,” says Lütjens.
Some initial results seemed to fly in the face of the researchers’ domain knowledge. The powerful deep-learning model should have been more accurate when making predictions about precipitation, since those data don’t follow a linear pattern.
They found that the high amount of natural variability in climate model runs can cause the deep-learning model to perform poorly on unpredictable long-term oscillations, like El Niño/La Niña. This skews the benchmarking scores in favor of LPS, which averages out those oscillations.
Constructing a new evaluation
From there, the researchers constructed a new evaluation with more data that address natural climate variability. With this new evaluation, the deep-learning model performed slightly better than LPS for local precipitation, but LPS was still more accurate for temperature predictions.
“It is important to use the modeling tool that is right for the problem, but in order to do that you also have to set up the problem the right way in the first place,” Selin says.
Based on these results, the researchers incorporated LPS into a climate emulation platform to predict local temperature changes in different emission scenarios.
“We are not advocating that LPS should always be the goal. It still has limitations. For instance, LPS doesn’t predict variability or extreme weather events,” Ferrari adds.
Rather, they hope their results emphasize the need to develop better benchmarking techniques, which could provide a fuller picture of which climate emulation technique is best suited for a particular situation.
“With an improved climate emulation benchmark, we could use more complex machine-learning methods to explore problems that are currently very hard to address, like the impacts of aerosols or estimations of extreme precipitation,” Lütjens says.
Ultimately, more accurate benchmarking techniques will help ensure policymakers are making decisions based on the best available information.
The researchers hope others build on their analysis, perhaps by studying additional improvements to climate emulation methods and benchmarks. Such research could explore impact-oriented metrics like drought indicators and wildfire risks, or new variables like regional wind speeds.
This research is funded, in part, by Schmidt Sciences, LLC, and is part of the MIT Climate Grand Challenges team for “Bringing Computation to the Climate Challenge.”
Simple climate prediction models can outperform deep-learning approaches when predicting future temperature changes, but deep learning has potential for estimating more complex variables like rainfall, according to an MIT study.
While sharing a single cup of coffee, Raul Radovitzky, the Jerome C. Hunsaker Professor in the Department of Aeronautics and Astronautics, and his wife Flavia Cardarelli, senior administrative assistant in the Institute for Data, Systems, and Society, recently discussed the love they have for their “nighttime jobs” living in McCormick Hall as faculty heads of house, and explained why it is so gratifying for them to be a part of this community.
The couple, married for 32 years, first met playing in a sandbox at the age of 3 in Argentina (but didn't start dating until they were in their 20s). Radovitzky has been a part of the MIT ecosystem since 2001, while Cardarelli began working at MIT in 2006. They became heads of house at McCormick Hall, the only all-female residence hall on campus, in 2015, and recently applied to extend their stay.
“Our head-of-house role is always full of surprises. We never know what we’ll encounter, but we love it. Students think we do this just for them, but in truth, it’s very rewarding for us as well. It keeps us on our toes and brings a lot of joy,” says Cardarelli. “We like to think of ourselves as the cool aunt and uncle for the students,” Radovitzky adds.
Heads of house at MIT influence many areas of students’ development by acting as advisors and mentors to their residents. Additionally, they work closely with the residence hall’s student government, as well as staff from the Division of Student Life, to foster their community’s culture.
Vice Chancellor for Student Life Suzy Nelson explains, “Our faculty heads of house have the long view at MIT and care deeply about students’ academic and personal growth. We are fortunate to have such dedicated faculty who serve in this way. The heads of house enhance the student experience in so many ways — whether it is helping a student with a personal problem, hosting Thanksgiving dinner for students who were not able to go home, or encouraging students to get involved in new activities, they are always there for students.”
“Our heads of house help our students fully participate in residential life. They model civil discourse at community dinners, mentor and tutor residents, and encourage residents to try new things. With great expertise and aplomb, they formally and informally help our students become their whole selves,” says Chancellor Melissa Nobles.
“I love teaching, I love conducting research with my group, and I enjoy serving as a head of house. The community aspect is deeply meaningful to me. MIT has become such a central part of our lives. Our kids are both MIT graduates, and we are incredibly proud of them. We do have a life outside of MIT — weekends with friends and family, personal activities — but MIT is a big part of who we are. It’s more than a job; it’s a community. We live on campus, and while it can be intense and demanding, we really love it,” says Radovitzky.
Jessica Quaye ’20, a former resident of McCormick Hall, says, “What sets McCormick apart is the way Raul and Flavia transform the four dorm walls into a home for everyone. You might come to McCormick alone, but you never leave alone. If you ran into them somewhere on campus, you could be sure that they would call you out and wave excitedly. You could invite Raul and Flavia to your concerts and they would show up to support your extracurricular endeavors. They built an incredible family that carries the fabric of MIT with a blend of academic brilliance, a warm open-door policy, and unwavering support for our extracurricular pursuits.”
Soundbytes
Q: What first drew you to the heads of house role?
Radovitzky: I had been aware of the role since I arrived at MIT, and over time, I started to wonder if it might be something we’d consider. When our kids were young, it didn’t seem feasible — we lived in the suburbs, and life there was good. But I always had an innate interest in building stronger connections with the student community.
Later, several colleagues encouraged us to apply. I discussed it with the family. Everyone was excited about it. Our teenagers were thrilled by the idea of living on a college campus. We applied together, submitting a letter as a family explaining why we were so passionate about it. We interviewed at McCormick, Baker, and McGregor. When we were offered McCormick, I’ll admit — I was nervous. I wasn’t sure I’d be the right fit for an all-female residence.
Cardarelli: We would have been nervous no matter where we ended up, but McCormick felt like home. It suited us in ways we didn’t anticipate. Raul, for instance, discovered he had a real rapport with the students, telling goofy jokes, making karaoke playlists, and learning about Taylor Swift and Nicki Minaj.
Radovitzky: It’s true! I never knew I’d become an expert at picking karaoke playlists. But we found our rhythm here, and it’s been deeply rewarding.
Q: What makes the McCormick community special?
Radovitzky: McCormick has a unique spirit. I can step out of our apartment and be greeted by 10 smiling faces. That energy is contagious. It’s not just about events or programming — it’s about building trust. We’ve built traditions around that, like our “make your own pizza” nights in our apartment, a wonderful McCormick event we inherited from our predecessors. We host four sessions each spring in which students roll out dough, choose toppings, and we chat as we cook and eat together. Everyone remembers the pizza nights — they’re mentioned in every testimonial.
Cardarelli: We’ve been lucky to have amazing graduate resident assistants and area directors every year. They’re essential partners in building community. They play a key role in creating community and supporting the students on their floors. They help with everything — from tutoring to events to walking students to urgent care if needed.
Radovitzky: In the fall, we take our residents to Crane Beach and host a welcome brunch. Karaoke in our apartment is a big hit too, and a unique way to make them comfortable coming to our apartment from day one. We do it three times a year — during orientation, and again each semester.
Cardarelli: We also host monthly barbecues open to all dorms and run McFast, our first-year tutoring program. Raul started by tutoring physics and math, four hours a week. Now, upperclass students lead most of the sessions. It’s great for both academic support and social connection.
Radovitzky: We also have an Independent Activities Period pasta night tradition. We cook for around 100 students, using four sauces that Flavia makes from scratch — bolognese, creamy mushroom, marinara, and pesto. Students love it.
Q: What’s unique about working in an all-female residence hall?
Cardarelli: I’ve helped students hem dresses, bake, and even apply makeup. It’s like having hundreds of daughters.
Radovitzky: The students here are incredibly mature and engaged. They show real interest in us as people. Many of the activities and connections we’ve built wouldn’t be possible in a different setting. Every year during “de-stress night,” I get my nails painted every color and have a face mask on. During “Are You Smarter Than an MIT Professor,” they dunk me in a water tank.
While previous research shows outrage and division drive engagement on social media, a new study of digital behaviour during the 2024 US election finds that this effect flips during a major crisis – when ‘ingroup solidarity’ becomes the engine of online virality.
Psychologists say the findings show positive emotions such as unity can cut through the hostility on social media, but it takes a shock to the system that threatens a community.
In a little over a week during the summer of 2024, the attempted assassination of Donald Trump at a rally (13 July) and Joe Biden’s suspension of his re-election campaign (21 July) completely reshaped the presidential race.
The University of Cambridge’s Social Decision-Making Lab collected over 62,000 public posts from the Facebook accounts of hundreds of US politicians, commentators and media outlets before and after these events to see how they affected online behaviour.*
“We wanted to understand the kinds of content that went viral among Republicans and Democrats during this period of high tension for both groups,” said Malia Marks, PhD candidate in Cambridge’s Department of Psychology and lead author of the study, published in the journal Proceedings of the National Academy of Sciences.
“Negative emotions such as anger and outrage along with hostility towards opposing political groups are usually rocket fuel for social media engagement. You might expect this to go into hyperdrive during times of crisis and external threat.”
“However, we found the opposite. It appears that political crises evoke not so much outgroup hate but rather ingroup love,” said Marks.
Just after the Trump assassination attempt, Republican-aligned posts signalling unity and shared identity received 53% more engagement than those that did not – an increase of 17 percentage points compared to just before the shooting.
These included posts such as evangelist Franklin Graham thanking God that Donald Trump is alive, and Fox News commentator Laura Ingraham posting: ‘Bleeding and unbowed, Trump faces relentless attacks yet stands strong for America. This is why his followers remain passionately loyal.’
At the same time, engagement levels for Republican posts attacking the Democrats saw a decrease of 23 percentage points from just a few days earlier.
After Biden suspended his re-election campaign, Democrat-aligned posts expressing solidarity received 91% more engagement than those that did not – a major increase of 71 percentage points over the period shortly before his withdrawal.
Posts included former US Secretary of Labor Robert Reich calling Biden “one of our most pro-worker presidents”, and former House Speaker Nancy Pelosi posting that Biden’s “legacy of vision, values and leadership make him one of the most consequential Presidents in American history.”
Biden’s withdrawal saw the continuation of a gradual rise in engagement for Democrat posts attacking Republicans – although over the 25 days in July covered by the analysis, almost a quarter of all conservative posts displayed “outgroup hostility” compared to just 5% of liberal posts.
Research led by the same Cambridge Lab, published in 2021, showed how social media posts criticizing or mocking those on the rival side of an ideological divide typically receive twice as many shares as posts that champion one’s own side.
“Social media platforms such as Twitter and Facebook are increasingly seen as creating toxic information environments that intensify social and political divisions, and there is plenty of research now to support this,” said Yara Kyrychenko, study co-author and PhD candidate in Cambridge’s Social Decision-Making Lab.
“Yet we see that social media can produce a rally-round-the-flag effect at moments of crisis, when the emotional and psychological preference for one’s own group takes over as the dominant driver of online behaviour.”
Last year, the Cambridge team (led by Kyrychenko) published a study of 1.6 million Ukrainian social media posts in the months before and after Russia’s full-scale invasion in February of 2022.
Following the invasion they found a similar spike for ‘ingroup solidarity’ posts, which got 92% more engagement on Facebook and 68% more on Twitter, while posts hostile to Russia received little extra engagement.
Researchers argue that the findings from the latest study are even more surprising than the Ukrainian results: Ukraine faced an existential threat that might be expected to unite its population, whereas the United States experienced a political crisis in a deeply polarised electorate.
“We didn’t know whether moments of political rather than existential crisis would trigger solidarity in a country as deeply polarised as the United States. But even here, group unity surged when leadership was threatened,” said Dr Jon Roozenbeek, Lecturer in Psychology at Cambridge University and senior author of the study.
“In times of crisis, ingroup love may matter more to us than outgroup hate on social media.”
* The study used 62,118 public posts from 484 Facebook accounts run by US politicians and partisan commentators or media sources from 5-29 July 2024.
Research reveals how political crises cause a shift in the force behind viral online content ‘from outgroup hate to ingroup love’.
The National University of Singapore (NUS) has received a generous S$3 million pledge from global real estate powerhouse Mapletree Investments (Mapletree) to strengthen service-learning courses that will empower over 4,000 student volunteers annually to uplift more than 60,000 beneficiaries. This collaboration underscores that everyone has a role in building a "we-first" society where the government, community and corporate partners work together to create a more inclusive Singapore.
This milestone moment was graced by Guest-of-Honour Ms Low Yen Ling, Senior Minister of State, Ministry of Culture, Community and Youth & Ministry of Trade and Industry, on 25 August 2025 at Mapletree Business City.
Under the Communities and Engagement (C&E) pillar of the General Education curriculum, NUS undergraduate students across various disciplines can read service-learning courses as part of their graduation requirements. These courses encourage deep reflection and constructive action on societal needs and real-world issues, such as the inequality and poverty that underprivileged and disadvantaged communities struggle with.
As the Principal Founding Donor and one of the largest donors towards the C&E Pillar with a focus on Seniors and Vulnerable Families, Mapletree’s contribution will sustain and expand NUS C&E courses such as GEN2060 Reconnect SeniorsSG, GEN2061 Support Healthy AgeingSG, GEN2062 Community Activities for Seniors with SG Cares, and GEN2070 Community Link (ComLink) Befrienders. Since the pilot launch of the C&E Pillar (Seniors and Vulnerable Families) in Academic Year 2022/2023, over 5,000 students have completed, or are currently enrolled in, these courses.
These service-learning courses run up to a year, encouraging students to take initiative in community service while developing critical thinking about complex social challenges.
Beyond volunteering, students also reflect, analyse and create solutions. The impact of this approach is multifaceted – beneficiaries find companionship and renewed hope; community partners gain extra help on the ground; students cultivate life-long values; and the ripple effects strengthen Singapore’s social fabric.
NUS President Professor Tan Eng Chye said, “We are grateful to Mapletree for this generous contribution, which will greatly enhance the impact of our service-learning courses. By empowering our students to serve the community, we are nurturing among the next generation empathy and a deeper awareness of societal needs among the disadvantaged and underprivileged. At the same time, we create opportunities for our students to do their part and support Singapore’s ageing population and lower-income families. These efforts reinforce our commitment to make a positive impact on society and community through our mission in education.”
Mr Edmund Cheng, Chairman of Mapletree, said, “Mapletree’s latest Corporate Social Responsibility (CSR) initiative with NUS aligns with two of our four CSR pillars – Healthcare and Education. Through our gift, as part of our US$10 million commitment to Temasek Trust’s Philanthropy Asia Alliance (PAA), these courses create beautiful bridges to facilitate relationships between students, seniors and vulnerable families, enriching the lives of all involved. We will continue to invest in the communities where we operate, strengthening the social fabric in meaningful ways.”
In 2023, Temasek Trust announced the launch of PAA to drive positive impact across Asia and mobilise collective philanthropic partnerships and strategies addressing global environmental and social challenges.
With Mapletree’s support, students will deepen their role as volunteers to implement hands-on initiatives to engage seniors and disadvantaged families. For example, as part of GEN2062, students facilitate thoughtfully designed activities at Active Ageing Centres and Senior Care Centres that stimulate cognitive function, enhance physical health, and improve the holistic well-being of elderly participants. Students in GEN2060 and GEN2070 conduct home visits to befriend seniors and vulnerable families, while students in GEN2061 share vital information about government assistance schemes with seniors through door-to-door visits. A small part of the gift will also go towards empowering student initiatives in other C&E courses to support these sectors.
Ms Cheryl Lim, Manager of Programmes at NTUC Health Senior Day Care, said, “We are heartened to witness the smiles on the seniors’ faces, made possible by the diverse range of activities organised by NUS students. The contributions of the students have been truly invaluable—their initiative in planning and leading these activities has not only enriched the lives of our seniors but also provided our team with interesting ideas. We truly appreciate the long-term, sustained collaboration and partnership with NUS that helps foster intergenerational bonds and nurture a continued sense of belonging, especially in our seniors, and within the broader community across generations.”
Complementing the efforts by community partners to promote better health and social engagement among seniors, the courses under the C&E pillar also enable NUS students to experience personal growth through their interaction and involvement with the seniors.
Mr Sean Ang Teng Han, a second-year student from the NUS Faculty of Science, who is currently reading the GEN2062 course, said, “It challenged me to step up and develop soft skills I did not have in the past, such as managing group dynamics, holding the attention of a crowd, and adapting quickly to different personalities. Learning how to engage and entertain a group of elderly participants has taught me a lot about communication and leadership in a more communal setting.”
Mapletree’s gift is the latest in a long-standing partnership with NUS that began over a decade ago with the establishment of the Mapletree Bursary in 2012. With a total endowment of S$900,000 to date, the Bursary has provided over 130 students with financial support, removing barriers that might otherwise hinder their education journey. With this latest gift, Mapletree hopes to promote intergenerational bonding and a culture of volunteering among youth, and to enhance ageing-in-place initiatives – part of wider efforts to address one of Singapore’s most pressing societal challenges: a rapidly ageing population, with one in four residents projected to be aged 65 or above by 2030.
Cognitive readiness denotes a person's ability to respond and adapt to the changes around them. This includes functions like keeping balance after tripping, or making the right decision in a challenging situation based on knowledge and past experiences. For military service members, cognitive readiness is crucial for their health and safety, as well as mission success. Injury to the brain is a major contributor to cognitive impairment, and between 2000 and 2024, more than 500,000 military service members were diagnosed with traumatic brain injury (TBI) — caused by anything from a fall during training to blast exposure on the battlefield. While impairments from factors like sleep deprivation can be treated through rest and recovery, those caused by injury may require more intensive and prolonged medical attention.
"Current cognitive readiness tests administered to service members lack the sensitivity to detect subtle shifts in cognitive performance that may occur in individuals exposed to operational hazards," says Christopher Smalt, a researcher in the laboratory's Human Health and Performance Systems Group. "Unfortunately, the cumulative effects of these exposures are often not well-documented during military service or after transition to Veterans Affairs, making it challenging to provide effective support."
Smalt is part of a team at the laboratory developing a suite of portable diagnostic tests that provide near-real-time screening for brain injury and cognitive health. One of these tools, called READY, is a smartphone or tablet app that helps identify a potential change in cognitive performance in less than 90 seconds. Another tool, called MINDSCAPE — which is being developed in collaboration with Richard Fletcher, a visiting scientist in the Rapid Prototyping Group who leads the Mobile Technology Lab at the MIT Auto-ID Laboratory, and his students — uses virtual reality (VR) technology for a more in-depth analysis to pinpoint specific conditions such as TBI, post-traumatic stress disorder, or sleep deprivation. Using these tests, medical personnel on the battlefield can make quick and effective decisions for treatment triage.
Both READY and MINDSCAPE respond to a series of Congressional mandates, military program requirements, and mission-driven health needs to improve brain-health screening capabilities for service members.
Cognitive readiness biomarkers
The READY and MINDSCAPE platforms incorporate more than a decade of laboratory research on finding the right indicators of cognitive readiness to build into rapid testing applications. Thomas Quatieri oversaw this work and identified balance, eye movement, and speech as three reliable biomarkers. He is leading the effort at Lincoln Laboratory to develop READY.
"READY stands for Rapid Evaluation of Attention for DutY, and is built on the premise that attention is the key to being 'ready' for a mission," he says. "In one view, we can think of attention as the mental state that allows you to focus on a task."
For someone to be attentive, their brain must continuously anticipate and process incoming sensory information and then instruct the body to respond appropriately. For example, if a friend yells "catch" and then throws a ball in your direction, in order to catch that ball, your brain must process the incoming auditory and visual data, predict in advance what may happen in the next few moments, and then direct your body to respond with an action that synchronizes those sensory data. The result? You realize from hearing the word "catch" and seeing the moving ball that your friend is throwing the ball to you, and you reach out a hand to catch it just in time.
"An unhealthy or fatigued brain — caused by TBI or sleep deprivation, for example — may have challenges within a neurosensory feed-forward [prediction] or feedback [error] system, thus hampering the person's ability to attend," Quatieri says.
READY's three tests measure a person’s ability to track a moving dot with their eyes, maintain balance, and hold a spoken vowel at a fixed pitch. The app then uses the data to calculate a variability, or "wobble," indicator, which represents changes from the test-taker's baseline, or from expected results based on others with similar demographics or on the general population. The results are displayed to the user as an indication of the patient's level of attention.
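The article does not describe how READY's "wobble" indicator is actually computed. As a purely hypothetical sketch (the function, parameters, and numbers below are illustrative, not Lincoln Laboratory's algorithm), one simple form of such an indicator is the deviation of a test run from a stored per-user baseline, expressed in baseline standard-deviation units:

```python
import statistics

def wobble_score(samples, baseline_mean, baseline_sd):
    """Hypothetical 'wobble' indicator: how far the mean of a
    test run drifts from the user's baseline, in baseline SD units.
    A larger score suggests a larger departure from baseline."""
    if baseline_sd <= 0:
        raise ValueError("baseline_sd must be positive")
    mean = statistics.fmean(samples)
    return abs(mean - baseline_mean) / baseline_sd

# Illustrative pitch-holding samples (Hz) against a 120 Hz baseline
score = wobble_score([124.0, 126.5, 123.0, 127.5], 120.0, 2.0)
print(score)  # prints 2.625
```

In practice a deployed tool would compare against population norms as well as the individual baseline, as the article notes; this sketch shows only the per-user case.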
If the READY screen shows an impairment, the administrator can then direct the subject to follow up with MINDSCAPE, which stands for Mobile Interface for Neurological Diagnostic Situational Cognitive Assessment and Psychological Evaluation. MINDSCAPE uses VR technology to administer additional, in-depth tests to measure cognitive functions such as reaction time and working memory. These standard neurocognitive tests are recorded with multimodal physiological sensors, such as electroencephalography (EEG), photoplethysmography, and pupillometry, to better pinpoint diagnosis.
Holistic and adaptable
A key advantage of READY and MINDSCAPE is their ability to leverage existing technologies, allowing for rapid deployment in the field. By utilizing sensors and capabilities already integrated into smartphones, tablets, and VR devices, these assessment tools can be easily adapted for use in operational settings at a significantly reduced cost.
"We can immediately apply our advanced algorithms to the data collected from these devices, without the need for costly and time-consuming hardware development," Smalt says. "By harnessing the capabilities of commercially available technologies, we can quickly provide valuable insights and improve upon traditional assessment methods."
Bringing new capabilities and AI for brain-health sensing into operational environments is a theme across several projects at the laboratory. Another example is EYEBOOM (Electrooculography and Balance Blast Overpressure Monitoring System), a wearable technology developed for the U.S. Special Forces to monitor blast exposure. EYEBOOM continuously monitors a wearer's eye and body movements as they experience blast energy, and warns of potential harm. For this program, the laboratory developed an algorithm that could identify a potential change in physiology resulting from blast exposure during operations, rather than waiting for a check-in.
All three technologies are in development to be versatile, so they can be adapted for other relevant uses. For example, a workflow could pair EYEBOOM's monitoring capabilities with the READY and MINDSCAPE tests: EYEBOOM would continuously monitor for exposure risk and then prompt the wearer to seek additional assessment.
"A lot of times, research focuses on one specific modality, whereas what we do at the laboratory is search for a holistic solution that can be applied for many different purposes," Smalt says.
MINDSCAPE is undergoing testing at the Walter Reed National Military Medical Center this year. READY will be tested with the U.S. Army Research Institute of Environmental Medicine (USARIEM) in 2026 in the context of sleep deprivation. Smalt and Quatieri also see the technologies finding use in civilian settings — on sporting event sidelines, in doctors' offices, or wherever else there is a need to assess brain readiness.
MINDSCAPE is being developed with clinical validation and support from Stefanie Kuchinsky at the Walter Reed National Military Medical Center. Quatieri and his team are developing the READY tests in collaboration with Jun Maruta and Jam Ghajar from the Brain Trauma Foundation (BTF), and Kristin Heaton from USARIEM. The tests are supported by concurrent evidence-based guidelines led by the BTF and the Military TBI Initiative at the Uniformed Services University.
Back in the 17th century, German astronomer Johannes Kepler figured out the laws of motion that made it possible to accurately predict where our solar system’s planets would appear in the sky as they orbit the sun. But it wasn’t until decades later, when Isaac Newton formulated the universal laws of gravitation, that the underlying principles were understood. Although they were inspired by Kepler’s laws, they went much further, and made it possible to apply the same formulas to everything from the trajectory of a cannon ball to the way the moon’s pull controls the tides on Earth — or how to launch a satellite from Earth to the surface of the moon or planets.
Today’s sophisticated artificial intelligence systems have gotten very good at making the kind of specific predictions that resemble Kepler’s orbit predictions. But do they know why these predictions work, with the kind of deep understanding that comes from basic principles like Newton’s laws? As the world grows ever-more dependent on these kinds of AI systems, researchers are struggling to measure just how they do what they do, and how deep their understanding of the real world actually is.
Now, researchers in MIT’s Laboratory for Information and Decision Systems (LIDS) and at Harvard University have devised a new approach to assessing how deeply these predictive systems understand their subject matter, and whether they can apply knowledge from one domain to a slightly different one. And by and large the answer at this point, in the examples they studied, is — not so much.
The findings were presented at the International Conference on Machine Learning, in Vancouver, British Columbia, last month by Harvard postdoc Keyon Vafa, MIT graduate student in electrical engineering and computer science and LIDS affiliate Peter G. Chang, MIT assistant professor and LIDS principal investigator Ashesh Rambachan, and MIT professor, LIDS principal investigator, and senior author Sendhil Mullainathan.
“Humans all the time have been able to make this transition from good predictions to world models,” says Vafa, the study’s lead author. So the question their team was addressing was, “have foundation models — has AI — been able to make that leap from predictions to world models? And we’re not asking are they capable, or can they, or will they. It’s just, have they done it so far?” he says.
“We know how to test whether an algorithm predicts well. But what we need is a way to test for whether it has understood well,” says Mullainathan, the Peter de Florez Professor with dual appointments in the MIT departments of Economics and Electrical Engineering and Computer Science and the senior author on the study. “Even defining what understanding means was a challenge.”
In the Kepler versus Newton analogy, Vafa says, “they both had models that worked really well on one task, and that worked essentially the same way on that task. What Newton offered was ideas that were able to generalize to new tasks.” That capability, when applied to the predictions made by various AI systems, would entail having it develop a world model so it can “transcend the task that you’re working on and be able to generalize to new kinds of problems and paradigms.”
Another analogy that helps to illustrate the point is the difference between centuries of accumulated knowledge of how to selectively breed crops and animals, versus Gregor Mendel’s insight into the underlying laws of genetic inheritance.
“There is a lot of excitement in the field about using foundation models to not just perform tasks, but to learn something about the world,” for example in the natural sciences, he says. “It would need to adapt, have a world model to adapt to any possible task.”
Are AI systems anywhere near the ability to reach such generalizations? To test the question, the team looked at different examples of predictive AI systems, at different levels of complexity. On the very simplest of examples, the systems succeeded in creating a realistic model of the simulated system, but as the examples got more complex that ability faded fast.
The team developed a new metric, a way of measuring quantitatively how well a system approximates real-world conditions. They call the measurement inductive bias — that is, a tendency or bias toward responses that reflect reality, based on inferences developed from looking at vast amounts of data on specific cases.
The simplest level of examples they looked at was known as a lattice model. In a one-dimensional lattice, something can move only along a line. Vafa compares it to a frog jumping between lily pads in a row. As the frog jumps or sits, it calls out what it’s doing — right, left, or stay. If it reaches the last lily pad in the row, it can only stay or go back. If someone, or an AI system, can just hear the calls, without knowing anything about the number of lily pads, can it figure out the configuration? The answer is yes: Predictive models do well at reconstructing the “world” in such a simple case. But even with lattices, as you increase the number of dimensions, the systems no longer can make that leap.
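The lily-pad world can be made concrete with a toy simulation (an illustrative sketch under assumed rules, not the study's actual experimental setup). Given the frog's calls and its starting pad, a listener can deterministically replay the entire sequence of positions, and that replayable state sequence is exactly the "world" a predictive model would need to have reconstructed:

```python
def frog_positions(calls, start, n_pads):
    """Replay the frog's announced moves ('left', 'right', 'stay')
    on a 1-D lattice of n_pads lily pads, returning its position
    after each call. Moves off either end leave the frog in place."""
    pos, trace = start, []
    for call in calls:
        if call == "right" and pos < n_pads - 1:
            pos += 1
        elif call == "left" and pos > 0:
            pos -= 1
        trace.append(pos)
    return trace

# A listener who knows the starting pad recovers the full state sequence
print(frog_positions(["right", "right", "stay", "left"], 0, 3))
# prints [1, 2, 2, 1]
```

The boundary rule is what makes the world recoverable from calls alone: once the frog's behavior reveals an edge, the number of pads is pinned down. Higher-dimensional lattices multiply the hidden state in just the way the researchers found models fail to track.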
“For example, in a two-state or three-state lattice, we showed that the model does have a pretty good inductive bias toward the actual state,” says Chang. “But as we increase the number of states, then it starts to have a divergence from real-world models.”
A more complex problem is a system that can play the board game Othello, which involves players alternately placing black or white disks on a grid. The AI models can accurately predict what moves are allowable at a given point, but it turns out they do badly at inferring what the overall arrangement of pieces on the board is, including ones that are currently blocked from play.
The team then looked at five different categories of predictive models actually in use, and again, the more complex the systems involved, the more poorly the predictive models performed at matching the true underlying world model.
With this new metric of inductive bias, “our hope is to provide a kind of test bed where you can evaluate different models, different training approaches, on problems where we know what the true world model is,” Vafa says. If it performs well on these cases where we already know the underlying reality, then we can have greater faith that its predictions may be useful even in cases “where we don’t really know what the truth is,” he says.
People are already trying to use these kinds of predictive AI systems to aid in scientific discovery, including such things as properties of chemical compounds that have never actually been created, or of potential pharmaceutical compounds, or for predicting the folding behavior and properties of unknown protein molecules. “For the more realistic problems,” Vafa says, “even for something like basic mechanics, we found that there seems to be a long way to go.”
Chang says, “There’s been a lot of hype around foundation models, where people are trying to build domain-specific foundation models — biology-based foundation models, physics-based foundation models, robotics foundation models, foundation models for other types of domains where people have been collecting a ton of data” and training these models to make predictions, “and then hoping that it acquires some knowledge of the domain itself, to be used for other downstream tasks.”
This work shows there’s a long way to go, but it also helps to show a path forward. “Our paper suggests that we can apply our metrics to evaluate how much the representation is learning, so that we can come up with better ways of training foundation models, or at least evaluate the models that we’re training currently,” Chang says. “As an engineering field, once we have a metric for something, people are really, really good at optimizing that metric.”
Researchers at MIT and Harvard University have devised a new approach to assessing how deeply predictive AI systems understand their subject matter, and whether they can apply knowledge from one domain to a slightly different one.
Mediterranean diet offsets genetic risk for dementia, study finds
Greatest benefit for those with highest predisposition to Alzheimer’s disease
Mass General Brigham Communications
New research suggests that following a Mediterranean-style diet may help offset a person’s genetic risk for developing Alzheimer’s disease.
The study, published in Nature Medicine and led by investigators from Mass General Brigham, Harvard T.H. Chan School of Public Health, and the Broad Institute of MIT and Harvard, found that people at the highest genetic risk for Alzheimer’s disease who followed a Mediterranean diet — rich in vegetables, fruits, nuts, whole grains, and low in red and processed meats — showed slower cognitive decline as well as a greater reduction in dementia risk than those at lower genetic risk.
“One reason we wanted to study the Mediterranean diet is because it is the only dietary pattern that has been causally linked to cognitive benefits in a randomized trial,” said study first author Yuxi Liu, a research fellow in the Department of Medicine at Brigham and Women’s Hospital and a postdoctoral fellow at the Harvard Chan School and the Broad. “We wanted to see whether this benefit might be different in people with varying genetic backgrounds, and to examine the role of blood metabolites, the small molecules that reflect how the body processes food and carries out normal functions.”
“These findings suggest that dietary strategies could help reduce the risk of cognitive decline and stave off dementia by broadly influencing key metabolic pathways.”
Yuxi Liu, study’s first author
Over the last few decades, researchers have learned more about the genetic and metabolic basis of Alzheimer’s disease and related dementias. These are among the most common causes of cognitive decline in older adults. Alzheimer’s disease is known to have a strong genetic component, with heritability estimated at up to 80 percent.
One gene in particular, apolipoprotein E, or APOE, has emerged as the strongest genetic risk factor for sporadic Alzheimer’s disease — the more common form, which develops later in life and is not directly inherited in a predictable pattern. People who carry one copy of the APOE4 variant have a three- to fourfold higher risk of developing Alzheimer’s. People with two copies of the APOE4 variant have a 12-fold higher risk of Alzheimer’s than those without.
To explore how the Mediterranean diet may reduce dementia risk and influence blood metabolites linked to cognitive health, the team analyzed data from 4,215 women in the Nurses’ Health Study, following participants from 1989 to 2023 (average age 57 at baseline). To validate their findings, the researchers analyzed similar data from 1,490 men in the Health Professionals Follow-Up Study, followed from 1993 to 2023.
Researchers evaluated long-term dietary patterns using food frequency questionnaires and examined participants’ blood samples for a broad range of metabolites. Genetic data were used to assess each participant’s inherited risk for Alzheimer’s disease. Participants were then followed over time for new cases of dementia. A subset of 1,037 women underwent regular telephone-based cognitive testing.
They found that people following a more Mediterranean-style diet had a lower risk of developing dementia and showed slower cognitive decline. The protective effect of the diet was strongest in the high-risk group with two copies of the APOE4 gene variant, suggesting that diet may help offset genetic risk.
“These findings suggest that dietary strategies, specifically the Mediterranean diet, could help reduce the risk of cognitive decline and stave off dementia by broadly influencing key metabolic pathways,” Liu said. “This recommendation applies broadly, but it may be even more important for individuals at a higher genetic risk, such as those carrying two copies of the APOE4 genetic variant.”
A study limitation was that the cohort consisted of well-educated individuals of European ancestry. More research is needed in diverse populations.
In addition, although the study reveals important associations, genetics and metabolomics are not yet part of most clinical risk-prediction models for Alzheimer’s disease, and people often don’t know their APOE genotype. More work is needed to translate these findings into routine medical practice.
“In future research, we hope to explore whether targeting specific metabolites through diet or other interventions could provide a more personalized approach to reducing dementia risk,” Liu said.
This study was funded in part by the National Institutes of Health.
Human brain organoid showing the integration of excitatory (magenta) and inhibitory neurons (green) of the cerebral cortex.
Credit: Arlotta Lab
Kermit Pattison
Harvard Staff Writer
Brain Science grants promote new approaches to treat bipolar disorder and discover its underlying causes
Paola Arlotta holds up a vial of clear fluid swirling with tiny orbs. When she shakes her wrist, the shapes flutter like the contents of a snow globe.
“Those small spheres swirling around are actually tiny pieces of human cerebral cortex,” said Arlotta, the Golub Family Professor of Stem Cell and Regenerative Biology, “except instead of coming from the brain of a person, they were made in the lab.”
Those minuscule shapes may represent a giant opportunity for breakthroughs into bipolar disorder, a mental health condition that affects about 8 million people in the U.S. These lab-grown “organoids” — brain-like tissue engineered from blood cells of living patients — offer a means to discover more effective drugs and develop more personalized treatments for bipolar patients.
Paola Arlotta.
Harvard file photo
The research effort is just one example of the diverse array of projects funded by the Bipolar Disorder Seed Grant Program of the Harvard Brain Science Initiative, a collaboration between the Faculty of Arts and Sciences (FAS) and Harvard Medical School (HMS). Over the last decade, the program has funded more than 90 projects across the University and affiliated hospitals and hosted five symposia. In some cases, the grants have enabled researchers to develop innovative approaches that subsequently won larger grants from major funding agencies and to publish their findings in prominent journals such as Nature.
“The goal for this grant program has always been to help creative scientists in our community initiate new avenues of research related to bipolar disorder,” said Venkatesh Murthy, co-director of the Harvard Brain Science Initiative and Raymond Leo Erikson Life Sciences Professor of Molecular & Cellular Biology. “New directions, as well as new thinkers, are vital for understanding and eventually curing this damaging disorder.”
The program began in 2015 with the first of a series of gifts from the Dauten Family Foundation and recently expanded thanks to a new gift from Sandra Lee Chen ’85 and Sidney Chen. Kent Dauten, M.B.A. ’79, and his wife, Liz, took up the cause after two of their four children were diagnosed with bipolar disorder despite no known family history of the illness. “The field is terribly underfunded and for too long was a discouraging corner of science because of the complexity of these brain disorders, but in recent years has become an exciting frontier for discovery,” said Kent Dauten. The Chens had similar motivations. “Bipolar disorder has touched our family,” said Sandra Chen. “Our experiences drive our commitment to help advance understanding of what causes this disruptive disorder.”
The program now provides each project with $174,000 spread over two years. The 11 projects funded this year will investigate bipolar disorder causes and treatments from perspectives including genetics, brain circuitry, sleep, immune dysregulation, stress hormones, and gut bacteria.
The seed grants seek to nurture “outside-the-box ideas,” Murthy said. He added, “Many of our grantees have made significant discoveries with this support.”
An unsolved problem
Bipolar disorder usually begins in adolescence, and patients suffer from symptoms for an average of nine years before they are diagnosed. It brings recurrent episodes of mania and depression — most often the latter.
The typical treatment involves mood stabilizer medications such as lithium. Some patients also are prescribed antipsychotic medications, but these can cause weight gain.
The disorder often brings other health challenges such as cardiovascular diseases, Type 2 diabetes, metabolic syndrome, and obesity. Patients have a life expectancy 12 to 14 years below average and elevated rates of suicide.
The causes of bipolar remain unknown, but the disorder appears to arise from a complex mix of genetic, epigenetic, neurochemical, and environmental factors.
Basic science: When brain signaling goes awry
Extreme mood swings are a hallmark of bipolar disorder. Patients often veer between manic episodes (characterized by grandiosity, risky behaviors, compulsive talking, distractibility, and reduced need for sleep) and depressive periods (sullen moods, joylessness, weight changes, fatigue, inability to concentrate, indecisiveness, and suicidal thoughts).
Nao Uchida, a professor of molecular and cellular biology, suspects that one driver of this volatility is dopamine, a neurotransmitter that plays a key role in learning, memory, movement, motivation, mood, and attention.
Uchida studies the role of dopamine in animal learning and decision-making. Dopamine often is described as the brain’s “reward system,” but Uchida suggests it is better understood as an arbiter of predictions and their outcomes. Mood often depends not on the result itself, but instead on how much the outcome differs from expectations — what scientists call the reward prediction error (RPE).
A few years ago, Uchida became interested in how dysregulation of the dopamine system might offer insights into the swings of bipolar disorder.
“We had not done research related to these diseases before, so this seed grant really let me enter the field,” said Uchida.
The funds allowed his lab to test how manipulation of depressive or manic states altered the responses of dopamine neurons in mice. The team incorporated new revelations about how synapses become potentiated or depressed to make certain pathways stronger or weaker. Some of their early findings will soon be published in Nature Communications.
Uchida posits that the disorder may be linked to skewed signaling of the neurotransmitters involved in prediction and learning. When the dopamine baseline is high, the person may become biased to learn from positive outcomes and fail to heed negative ones — and thus become prone to taking dangerous risks or entering manic states. In contrast, when the dopamine baseline is low, people pay too much attention to negative outcomes and ignore positive ones — and this pessimism pushes them toward depression.
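Uchida's idea can be illustrated with a toy reinforcement-learning sketch. This is a minimal illustration, not the lab's actual model: the asymmetric-gain scheme, function names, and all parameter values below are assumptions chosen only to show how a "dopamine baseline" bias could tilt learning toward positive or negative surprises.

```python
# Toy model of biased reward-prediction-error (RPE) learning.
# baseline_bias > 0 over-weights positive surprises (mania-like optimism);
# baseline_bias < 0 over-weights negative surprises (depression-like pessimism).

def run_learning(baseline_bias, rewards, alpha=0.1):
    """Track a running value estimate updated by RPEs, with the learning
    rate scaled asymmetrically depending on the sign of each RPE."""
    value = 0.0
    for r in rewards:
        rpe = r - value  # reward prediction error: outcome minus expectation
        gain = 1.0 + baseline_bias if rpe > 0 else 1.0 - baseline_bias
        value += alpha * gain * rpe
    return value

rewards = [1, 0, 1, 0, 1, 0]            # an evenly mixed stream of outcomes
optimist = run_learning(+0.8, rewards)   # learns mostly from wins
pessimist = run_learning(-0.8, rewards)  # learns mostly from losses
```

Fed the identical sequence of outcomes, the positively biased learner converges to a much higher value estimate than the negatively biased one — the same environment, read through two different baselines.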
“A lot of our future predictions depend on our experiences,” said Uchida. “I think that process might be altered in various diseases, including depression, addiction, and bipolar disorders.”
Nao Uchida (left) and Louisa Sylvia.
Harvard file photo; courtesy photo
Clinical research: Reducing obesity
Louisa Sylvia got an intimate glimpse of bipolar disorder in her first job after college. Working as a clinical research coordinator in a bipolar clinic, she witnessed patients struggling with anxiety, depression, and other symptoms. Again and again, she saw patients gain weight after being prescribed medications.
“I quickly became disappointed by the options that were out there for individuals with bipolar,” recalled Sylvia, now an associate professor in the Department of Psychiatry at Mass General Hospital and HMS. “It was really just medications — medications that can have really bad side effects.”
Sylvia has devoted her career to finding better options. (She also is the author of “The Wellness Workbook for Bipolar Disorder: Your Guide to Getting Healthy and Improving Your Mood.”) Even with the best current medications and psychotherapy, many patients continue to suffer from depression and other side effects. To supplement standard therapies, she has sought to develop interventions involving diet, exercise, and wellness.
One promising strategy is time-restricted eating (TRE). Restricting meals to a limited window — say 8 a.m. to 6 p.m. — can result in weight loss, improved mood and cognition, and better sleep.
With the seed grant, Sylvia plans to conduct a trial to evaluate the effects of TRE on bipolar patients. The study will investigate how the regulation of eating habits affects weight, mood, cognition, quality of life, and sleep patterns. She will work with Leilah Grant, an instructor at HMS and researcher at Brigham and Women’s Hospital who specializes in sleep and circadian physiology.
“For individuals who are depressed or have difficulty with motivation or energy, TRE is actually considered one of the easier lifestyle interventions to adhere to,” said Sylvia, who also is associate director of the Dauten Family Center for Bipolar Treatment Innovation at MGH. “We’re basically just saying, ‘Don’t focus as much on what you eat, but rather when you are eating.’”
The seed grants seek to nurture promising approaches that might not get funded through other channels. Sylvia can attest to the value of this opportunity; she had two TRE grant applications for federal funding rejected.
“I look at it like an innovation grant to try something that’s a little bit different but won’t get funded by the normal channels,” she said.
Translational research: Brain avatars
Despite decades of research, the success rate of drugs for treating bipolar disorder remains frustratingly low. Lithium, the mainstay first-line treatment, fully benefits only about 30 percent of patients — but three-quarters of them also suffer from profound side effects.
Animal models do not always translate to human medicine. Among humans, responses vary greatly; some individuals benefit from drug treatments while others do not.
To address these shortcomings, Arlotta is developing an innovative method to test drugs on brain cells of people with bipolar — without putting the humans themselves at risk.
Her team has spent more than a decade developing human brain organoids. They begin by taking a single sample of blood from a person. Because blood cells carry copies of our DNA, they hold the instruction manuals that guide development from fetus to adult. With a series of biochemical signals, these blood cells are reprogrammed to become stem cells. The team then uses another set of signals to mimic the normal process of cell differentiation to grow human brain cells — except as cell cultures outside the body.
“You can grow thousands and thousands of brain organoids from any one of us,” said Arlotta. “If the blood comes from a patient with a disorder, then every single cell in that organoid carries the genome, and genetic risk, of that patient.”
These “avatars” — each about five millimeters in diameter — contain millions of brain cells and hundreds of different cell types. “That is the only experimental model of our brain that science has today,” she said. “It may not be possible to investigate the brain of a patient with bipolar disorder, but scientists might be able to use their avatars.”
In pilot studies, the Arlotta team created brain organoids from stem cells from two groups of bipolar patients: “lithium responders” who benefit from the drug and “lithium nonresponders” who do not. The researchers will test whether these organoids replicate the differences seen in living patients — and then use them to develop more effective therapeutic drugs.
But Arlotta knows that no single approach represents a panacea. Because bipolar disorder remains so mysterious, the seed grant program is valuable because it promotes many promising lines of research across disciplines.
“The program has the modesty of understanding that we know very little about bipolar disorder,” said Arlotta. “Therefore, we need to have multiple shots on goal.”
A new system developed by Cornell Tech researchers helps users detect when their online accounts have been compromised — without exposing their personal devices to invasive tracking by web services.
A research team from ETH Zurich has taught the four-legged robot ANYmal to play badminton – including precise arm movements, quick reflexes and nimble footwork.
For both research and medical purposes, researchers have spent decades pushing the limits of microscopy to produce ever deeper and sharper images of brain activity, not only in the cortex but also in regions underneath, such as the hippocampus. In a new study, a team of MIT scientists and engineers demonstrates a new microscope system capable of peering exceptionally deep into brain tissues to detect the molecular activity of individual cells by using sound.
“The major advance here is to enable us to image deeper at single-cell resolution,” says neuroscientist Mriganka Sur, a corresponding author along with mechanical engineering professor Peter So and principal research scientist Brian Anthony. Sur is the Paul and Lilah Newton Professor in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences at MIT.
In the journal Light: Science & Applications, the team demonstrates that they could detect NAD(P)H, a molecule tightly associated with cell metabolism in general and electrical activity in neurons in particular, all the way through samples such as a 1.1-millimeter “cerebral organoid,” a 3D mini brain-like tissue generated from human stem cells, and a 0.7-millimeter-thick slice of mouse brain tissue.
In fact, says co-lead author and mechanical engineering postdoc W. David Lee, who conceived the microscope’s innovative design, the system could have peered far deeper, but the test samples weren’t big enough to demonstrate that.
“That’s when we hit the glass on the other side,” he says. “I think we’re pretty confident about going deeper.”
Still, a depth of 1.1 millimeters is more than five times deeper than other microscope technologies can resolve NAD(P)H within dense brain tissue. The new system achieved the depth and sharpness by combining several advanced technologies to precisely and efficiently excite the molecule and then to detect the resulting energy, all without having to add any external labels, either via added chemicals or genetically engineered fluorescence.
Rather than focusing the required NAD(P)H excitation energy on a neuron with near ultraviolet light at its normal peak absorption, the scope accomplishes the excitation by focusing an intense, extremely short burst of light (a quadrillionth of a second long) at three times the normal absorption wavelength. Such “three-photon” excitation penetrates deep into tissue with less scattering by brain tissue because of the longer wavelength of the light (“like fog lamps,” Sur says). Meanwhile, although the excitation produces a weak fluorescent signal of light from NAD(P)H, most of the absorbed energy produces a localized (about 10 microns) thermal expansion within the cell, which produces sound waves that travel relatively easily through tissue compared to the fluorescence emission. A sensitive ultrasound microphone in the microscope detects those waves and, with enough sound data, software turns them into high-resolution images (much like a sonogram does). Imaging created in this way is “three-photon photoacoustic imaging.”
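The "three times the normal absorption wavelength" arithmetic can be checked directly: three photons at triple the wavelength together carry the same energy as one near-UV photon. A quick sketch — the ~340 nm one-photon absorption peak for NAD(P)H is an assumed textbook value here, since the article says only "near ultraviolet":

```python
# Energy bookkeeping for three-photon excitation (illustrative numbers).
h = 6.626e-34   # Planck's constant, J*s
c = 3.0e8       # speed of light, m/s

lam1 = 340e-9           # assumed one-photon absorption wavelength, m
lam3 = 3 * lam1         # three-photon wavelength, ~1020 nm (near-infrared)

E1 = h * c / lam1       # energy of one near-UV photon
E3 = 3 * (h * c / lam3) # combined energy of three infrared photons

# E3 equals E1: the three long-wavelength photons deposit the same total
# excitation energy, but the longer wavelength scatters far less in tissue,
# which is what lets the beam reach deep into the brain.
```

The depth advantage comes entirely from the wavelength, not the energy: the energy delivered to the molecule is unchanged, while scattering drops steeply as wavelength grows.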
“We merged all these techniques — three-photon, label-free, photoacoustic detection,” says co-lead author Tatsuya Osaki, a research scientist in the Picower Institute in Sur’s lab. “We integrated all these cutting-edge techniques into one process to establish this ‘Multiphoton-In and Acoustic-Out’ platform.”
Lee and Osaki combined with research scientist Xiang Zhang and postdoc Rebecca Zubajlo to lead the study, in which the team demonstrated reliable detection of the sound signal through the samples. So far, the team has produced visual images from the sound at various depths as they refine their signal processing.
In the study, the team also shows simultaneous “third-harmonic generation” imaging, which comes from the three-photon stimulation and finely renders cellular structures, alongside their photoacoustic imaging, which detects NAD(P)H. They also note that their photoacoustic method could detect other molecules, such as the genetically encoded calcium indicator GCaMP, which neuroscientists use to report neural electrical activity.
With the concept of label-free, multiphoton, photoacoustic microscopy (LF-MP-PAM) established in the paper, the team is now looking ahead to neuroscience and clinical applications.
For instance, through the company Precision Healing, Inc., which he founded and sold, Lee has already established that NAD(P)H imaging can inform wound care. In the brain, levels of the molecule are known to vary in conditions such as Alzheimer’s disease, Rett syndrome, and seizures, making it a potentially valuable biomarker. Because the new system is label-free (i.e., no added chemicals or altered genes), it could be used in humans, for instance, during brain surgeries.
The next step for the team is to demonstrate the system in a living animal, rather than just in in vitro and ex vivo tissues. The technical challenge there is that the microphone can no longer be on the opposite side of the sample from the light source (as it was in the current study). It has to be on top, just like the light source.
Lee says he expects that full imaging at depths of 2 millimeters in live brains is entirely feasible, given the results in the new study.
“In principle, it should work,” he says.
Mercedes Balcells and Elazer Edelman are also authors of the paper. Funding for the research came from sources including the National Institutes of Health, the Simon Center for the Social Brain, the lab of Peter So, The Picower Institute for Learning and Memory, and the Freedom Together Foundation.
Researchers have developed a new microscope system to finely image molecules deep in live brain tissues, an advance that could boost neuroscience and clinical research.
The MIT Sailing Pavilion hosted an altogether different marine vessel recently: a prototype of a solar electric boat developed by James Worden ’89, the founder of the MIT Solar Electric Vehicle Team (SEVT). Worden visited the pavilion on a sizzling, sunny day in late July to offer students from the SEVT, the MIT Edgerton Center, MIT Sea Grant, and the broader community an inside look at the Anita, named for his late wife.
Worden’s fascination with solar power began at age 10, when he picked up a solar chip at a “hippy-like” conference in his hometown of Arlington, Massachusetts. “My eyes just lit up,” he says. He built his first solar electric vehicle in high school, fashioned out of cardboard and wood (taking first place at the 1984 Massachusetts Science Fair), and continued his journey at MIT, founding SEVT in 1986. It was through SEVT that he met his wife and lifelong business partner, Anita Rajan Worden ’90. Together, they founded two companies in the solar electric and hybrid vehicles space, and in 2022 launched a solar electric boat company.
On the Charles River, Worden took visitors for short rides on Anita, including a group of current SEVT students who peppered him with questions. The 20-foot pontoon boat, just 12 feet wide and 7 feet tall, is made of carbon fiber composites, single crystalline solar photovoltaic cells, and lithium iron phosphate battery cells. Ultimately, Worden envisions the prototype could have applications as mini-ferry boats and water taxis.
With warmth and humor, he drew parallels between the boat’s components and mechanics and those of the solar cars the students are building. “It’s fun! If you think about all the stuff you guys are doing, it’s all the same stuff,” he told them, “optimizing all the different systems and making them work.” He also explained the design considerations unique to boating applications, like refining the hull shape for efficiency and maneuverability in variable water and wind conditions, and the critical importance of protecting wiring and controls from open water and condensate.
“Seeing Anita in all its glory was super cool,” says Nicole Lin, vice captain of SEVT. “When I first saw it, I could immediately map the different parts of the solar car to its marine counterparts, which was astonishing to see how far I’ve come as an engineer with SEVT. James also explained the boat using solar car terms, as he drew on his experience with solar cars for his solar boats. It blew my mind to see the engineering we learned with SEVT in action.”
Over the years, the Wordens have been avid supporters of SEVT and the Edgerton Center, so the visit was, in part, a way to pay it forward to MIT. “There’s a lot of connections,” he says. He’s still awed by the fact that Harold “Doc” Edgerton, upon learning about his interest in building solar cars, carved out a lab space for him to use in Building 20 — as a first-year student. And a few years ago, as Worden became interested in marine vessels, he tapped Sea Grant Education Administrator Drew Bennett for a 90-minute whiteboard lecture, “MIT fire-hose style,” on hydrodynamics. “It was awesome!” he says.
A group of visitors sets off from the dock for a cruise around the Charles River. The Anita weighs about 2,800 pounds and can accommodate six passengers at a time.
A fast radio burst is an immense flash of radio emission that lasts for just a few milliseconds, during which it can momentarily outshine every other radio source in its galaxy. These flares can be so bright that their light can be seen from halfway across the universe, several billion light years away.
The sources of these brief and dazzling signals are unknown. But scientists now have a chance to study a fast radio burst (FRB) in unprecedented detail. An international team of scientists including physicists at MIT has detected a near and ultrabright fast radio burst some 130 million light-years from Earth in the constellation Ursa Major. It is one of the closest FRBs detected to date. It is also the brightest — so bright that the signal has garnered the informal moniker, RBFLOAT, for “radio brightest flash of all time.”
The burst’s brightness, paired with its proximity, is giving scientists the closest look yet at FRBs and the environments from which they emerge.
“Cosmically speaking, this fast radio burst is just in our neighborhood,” says Kiyoshi Masui, associate professor of physics and affiliate of MIT’s Kavli Institute for Astrophysics and Space Research. “This means we get this chance to study a pretty normal FRB in exquisite detail.”
The clarity of the new detection is thanks to a significant upgrade to the Canadian Hydrogen Intensity Mapping Experiment (CHIME), a large array of halfpipe-shaped antennae based in British Columbia. CHIME was originally designed to detect and map the distribution of hydrogen across the universe. The telescope is also sensitive to ultrafast and bright radio emissions. Since it started observations in 2018, CHIME has detected about 4,000 fast radio bursts, from all parts of the sky. But the telescope had not been able to precisely pinpoint the location of each fast radio burst, until now.
CHIME recently got a significant boost in precision, in the form of CHIME Outriggers — three miniature versions of CHIME, each sited in different parts of North America. Together, the telescopes work as one continent-sized system that can focus in on any bright flash that CHIME detects, to pin down its location in the sky with extreme precision.
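The payoff of a continent-sized baseline can be ballparked with the diffraction limit, which sets an interferometer's angular resolution at roughly wavelength divided by baseline. The numbers below are assumed round figures for the sketch (CHIME observes at 400–800 MHz; the 3,000 km baseline is a rough continental scale), not values from the study:

```python
import math

# Back-of-envelope angular resolution of a continent-scale radio interferometer.
c = 3.0e8           # speed of light, m/s
freq = 600e6        # assumed observing frequency, Hz (mid-band for CHIME)
baseline = 3.0e6    # assumed CHIME-to-Outrigger baseline, m (~3,000 km)

wavelength = c / freq                    # 0.5 m
theta_rad = wavelength / baseline        # diffraction-limited resolution, radians
theta_mas = math.degrees(theta_rad) * 3600 * 1000  # milliarcseconds
```

Under these assumptions the resolution lands in the tens of milliarcseconds — fine enough to place a burst within a particular region of its host galaxy, versus the arcminute-scale blur of a single dish.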
“Imagine we are in New York and there’s a firefly in Florida that is bright for a thousandth of a second, which is usually how quick FRBs are,” says MIT Kavli graduate student Shion Andrew. “Localizing an FRB to a specific part of its host galaxy is analogous to figuring out not just what tree the firefly came from, but which branch it’s sitting on.”
The new fast radio burst is the first detection made using the combination of CHIME and the completed CHIME Outriggers. Together, the telescope array identified the FRB and determined not only the specific galaxy, but also the region of the galaxy from which the burst originated. It appears that the burst arose from the edge of the galaxy, just outside of a star-forming region. The precise localization of the FRB is allowing scientists to study the environment around the signal for clues to what brews up such bursts.
“As we’re getting these much more precise looks at FRBs, we’re better able to see the diversity of environments they’re coming from,” says MIT physics postdoc Adam Lanman.
Lanman, Andrew, and Masui are members of the CHIME Collaboration — which includes scientists from multiple institutions around the world — and are authors of the new paper detailing the FRB's discovery.
An older edge
Each of CHIME’s Outrigger stations continuously monitors the same swath of sky as the parent CHIME array. Both CHIME and the Outriggers “listen” for radio flashes, at incredibly short, millisecond timescales. Even over several minutes, such precision monitoring can amount to a huge amount of data. If CHIME detects no FRB signal, the Outriggers automatically delete the last 40 seconds of data to make room for the next span of measurements.
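The 40-second rolling window described above behaves like a triggered ring buffer: recent samples are held in a fixed-size buffer that silently discards the oldest data, and are persisted only when the parent array signals a detection. A minimal sketch — the class, sample rate, and method names are invented for illustration, not the actual CHIME Outrigger software:

```python
from collections import deque

BUFFER_SECONDS = 40
SAMPLES_PER_SECOND = 10  # toy rate; real instruments sample far faster


class TriggeredBuffer:
    """Keep only the most recent 40 s of samples; save them on a trigger."""

    def __init__(self):
        # deque with maxlen evicts the oldest sample automatically
        self.buf = deque(maxlen=BUFFER_SECONDS * SAMPLES_PER_SECOND)
        self.saved = []

    def record(self, sample):
        self.buf.append(sample)  # old data falls off the far end

    def trigger(self):
        self.saved.append(list(self.buf))  # persist the last 40 s


tb = TriggeredBuffer()
for t in range(1000):  # stream 100 s of samples; only the last 40 s survive
    tb.record(t)
tb.trigger()
```

After the loop, the saved snapshot holds only samples 600 through 999 — everything older was dropped without any explicit deletion step, which is what makes the scheme cheap enough to run continuously.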
On March 16, 2025, CHIME detected an ultrabright flash of radio emissions, which automatically triggered the CHIME Outriggers to record the data. Initially, the flash was so bright that astronomers were unsure whether it was an FRB or simply a terrestrial event caused, for instance, by a burst of cellular communications.
That notion was put to rest as the CHIME Outrigger telescopes homed in on the flash and pinned down its location to NGC 4141 — a spiral galaxy in the constellation Ursa Major about 130 million light-years away, which happens to be surprisingly close to our own Milky Way. The detection is one of the closest and brightest fast radio bursts detected to date.
Follow-up observations in the same region revealed that the burst came from the very edge of an active region of star formation. While it’s still a mystery as to what source could produce FRBs, scientists’ leading hypothesis points to magnetars — young neutron stars with extremely powerful magnetic fields that can spin out high-energy flares across the electromagnetic spectrum, including in the radio band. Physicists suspect that magnetars are found in the center of star formation regions, where the youngest, most active stars are forged. The location of the new FRB, just outside a star-forming region in its galaxy, may suggest that the source of the burst is a slightly older magnetar.
“These are mostly hints,” Masui says. “But the precise localization of this burst is letting us dive into the details of how old an FRB source could be. If it were right in the middle, it would only be thousands of years old — very young for a star. This one, being on the edge, may have had a little more time to bake.”
No repeats
In addition to pinpointing where the new FRB was in the sky, the scientists also looked back through CHIME data to see whether any similar flares occurred in the same region in the past. Since the first FRB was discovered in 2007, astronomers have detected over 4,000 radio flares. Most of these bursts are one-offs. But a few percent have been observed to repeat, flashing every so often. And an even smaller fraction of these repeaters flash in a pattern, like a rhythmic heartbeat, before flaring out. A central question surrounding fast radio bursts is whether repeaters and nonrepeaters come from different origins.
The scientists looked through CHIME’s six years of data and came up empty: This new FRB appears to be a one-off, at least in the last six years. The findings are particularly exciting, given the burst’s proximity. Because it is so close and so bright, scientists can probe the environment in and around the burst for clues to what might produce a nonrepeating FRB.
“Right now we’re in the middle of this story of whether repeating and nonrepeating FRBs are different. These observations are putting together bits and pieces of the puzzle,” Masui says.
“There’s evidence to suggest that not all FRB progenitors are the same,” Andrew adds. “We’re on track to localize hundreds of FRBs every year. The hope is that a larger sample of FRBs localized to their host environments can help reveal the full diversity of these populations.”
The construction of the CHIME Outriggers was funded by the Gordon and Betty Moore Foundation and the U.S. National Science Foundation. The construction of CHIME was funded by the Canada Foundation for Innovation and provinces of Quebec, Ontario, and British Columbia.
A team of scientists, including physicists at MIT, have detected a near and ultrabright fast radio burst some 130 million light-years from Earth in the constellation Ursa Major.
‘There is literally no other intervention in our field that impacts burnout to this extent’
AI-driven scribes that record patient visits and draft clinical notes for physician review led to significant reductions in physician burnout and improvements in well-being, according to a Mass General Brigham study of two large healthcare systems.
The findings, published in JAMA Network Open, draw on surveys of more than 1,400 physicians and advanced practice providers at both Harvard-affiliated Mass General Brigham and Atlanta’s Emory Healthcare.
At MGB, use of ambient documentation technologies was associated with a 21.2 percent absolute reduction in burnout prevalence at 84 days, while Emory Healthcare saw a 30.7 percent absolute increase in documentation-related well-being at 60 days.
“Ambient documentation technology has been truly transformative in freeing up physicians from their keyboards to have more face-to-face interaction with their patients,” said study co-senior author Rebecca Mishuris, chief medical information officer at MGB, a faculty member at Harvard Medical School, and a primary care physician in the healthcare system. “Our physicians tell us that they have their nights and weekends back and have rediscovered their joy of practicing medicine. There is literally no other intervention in our field that impacts burnout to this extent.”
Physician burnout affects more than 50 percent of U.S. doctors and has been linked to time spent in electronic health records, particularly after hours. There is additional evidence that the burden and anticipation of needing to complete appointment notes also contribute significantly to physician burnout.
“Burnout adversely impacts both providers and their patients who face greater risks to their safety and access to care,” said Lisa Rotenstein, a co-senior study author and director of The Center for Physician Experience and Practice Excellence at Brigham and Women’s Hospital. She is also an assistant clinical professor of medicine at the UCSF School of Medicine. “This is an issue that hospitals nationwide are looking to tackle, and ambient documentation provides a scalable technology worth further study.”
In qualitative feedback, users reported that ambient documentation enabled more “contact with patients and families” and improvements in their “joy in practice,” while recognizing its potential to “fundamentally [change] the experience of being a physician.” However, some users felt it added time to their note-writing or had less utility for certain visit types or medical specialties. Since the pilot studies began, the AI technologies have evolved as the vendors make changes based on user feedback and as the large language models that power the technologies improve through additional training, warranting continued study.
The researchers analyzed survey data from pilot users of ambient documentation technologies at two large health systems. At Mass General Brigham, 873 physicians and advanced practice providers were given surveys before enrolling, then after 42 and 84 days. About 30 percent of users responded to the surveys at 42 days, and 22 percent at 84 days. All 557 Emory pilot users were surveyed before the pilots and then at 60 days of use, with an 11 percent response rate. Researchers analyzed the survey results quantifying different measures of burnout at Mass General Brigham and physician well-being at Emory Healthcare.
The study authors added that given that these were pilot users and there were limited survey response rates, the findings likely represent the experience of more enthusiastic users, and more research is needed to track clinical use of ambient documentation across a broader group of providers.
Mass General Brigham’s ambient documentation program launched in July 2023 as a proof-of-concept pilot study involving 18 physicians. By July 2024, the pilot, which tested two different ambient documentation technologies, expanded to more than 800 providers. As of April 2025, the technologies have been made available to all Mass General Brigham physicians, with more than 3,000 providers routinely using the tools. Later this year, the program will look to expand to other healthcare professionals such as nurses, physical and occupational therapists, and speech-language pathologists.
“Ambient documentation technology offers a step forward in healthcare and new tools that may positively impact our clinical teams,” said Jacqueline You, lead study author and a digital clinical lead and primary care associate physician at Mass General Brigham. “While stories of providers being able to call more patients or go home and play with their kids without having to worry about notes are powerful, we feel the burnout data speak similar volumes of the promise of these technologies, and importance of continuing to study them.”
Ambient documentation’s use will continue to be studied with surveys and other measures tracking burnout rates and time spent on clinical notes inside and outside of working hours. Researchers will evaluate whether burnout rates improve over time as the AI evolves, or if these burnout gains plateau or are reversed.
This project received financial support from the Physician’s Foundation and the National Library of Medicine of the National Institutes of Health.
Rising global temperatures affect human activity in many ways. Now, a new study illuminates an important dimension of the problem: Very hot days are associated with more negative moods, as shown by a large-scale look at social media postings.
Overall, the study examines 1.2 billion social media posts from 157 countries over the span of a year. The research finds that when the temperature rises above 95 degrees Fahrenheit, or 35 degrees Celsius, expressed sentiments become about 25 percent more negative in lower-income countries and about 8 percent more negative in better-off countries. Extreme heat affects people emotionally, not just physically.
“Our study reveals that rising temperatures don’t just threaten physical health or economic productivity — they also affect how people feel, every day, all over the world,” says Siqi Zheng, a professor in MIT’s Department of Urban Studies and Planning (DUSP) and Center for Real Estate (CRE), and co-author of a new paper detailing the results. “This work opens up a new frontier in understanding how climate stress is shaping human well-being at a planetary scale.”
The paper, “Unequal Impacts of Rising Temperatures on Global Human Sentiment,” is published today in the journal One Earth. The authors are Jianghao Wang, of the Chinese Academy of Sciences; Nicolas Guetta-Jeanrenaud SM ’22, a graduate of MIT’s Technology and Policy Program (TPP) and Institute for Data, Systems, and Society; Juan Palacios, a visiting assistant professor at MIT’s Sustainable Urbanization Lab (SUL) and an assistant professor at Maastricht University; Yichun Fan, of SUL and Duke University; Devika Kakkar, of Harvard University; Nick Obradovich, of SUL and the Laureate Institute for Brain Research in Tulsa; and Zheng, who is the STL Champion Professor of Urban and Real Estate Sustainability at CRE and DUSP. Zheng is also the faculty director of CRE and founded the Sustainable Urbanization Lab in 2019.
Social media as a window
To conduct the study, the researchers evaluated 1.2 billion posts from the social media platforms Twitter and Weibo, all of which appeared in 2019. They used a natural language processing model called Bidirectional Encoder Representations from Transformers (BERT) to analyze posts in 65 languages across the 157 countries in the study.
Each social media post was given a sentiment rating from 0.0 (for very negative posts) to 1.0 (for very positive posts). The posts were then aggregated geographically to 2,988 locations and evaluated in correlation with area weather. From this method, the researchers could then deduce the connection between extreme temperatures and expressed sentiment.
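The scoring-and-aggregation step can be sketched as follows. This is a toy illustration, not the study's actual pipeline: the word-list scorer stands in for the multilingual BERT classifier, and all names, locations, and data are invented.

```python
# Sketch: score each post 0.0 (negative) to 1.0 (positive), then average
# per location and temperature bin. A stand-in for the study's BERT-based
# pipeline; the scorer and data here are purely illustrative.
from collections import defaultdict
from statistics import mean

def score_sentiment(text: str) -> float:
    """Toy stand-in for a sentiment model: returns a 0.0-1.0 score."""
    negative = {"awful", "miserable", "hot", "angry"}
    positive = {"great", "happy", "lovely", "cool"}
    words = text.lower().split()
    pos = sum(w in positive for w in words)
    neg = sum(w in negative for w in words)
    if pos + neg == 0:
        return 0.5  # no signal words: treat as neutral
    return pos / (pos + neg)

def mean_sentiment_by_bin(posts):
    """posts: iterable of (location, temp_c, text).
    Groups posts into 'extreme' (>= 35 C) vs 'normal' bins per location
    and returns each bin's mean sentiment score."""
    bins = defaultdict(list)
    for location, temp_c, text in posts:
        label = "extreme" if temp_c >= 35 else "normal"
        bins[(location, label)].append(score_sentiment(text))
    return {key: mean(scores) for key, scores in bins.items()}

posts = [
    ("city_a", 37, "so hot and miserable today"),
    ("city_a", 22, "lovely cool evening, feeling great"),
    ("city_a", 36, "awful heat, angry commute"),
]
print(mean_sentiment_by_bin(posts))
```

Comparing the "extreme" bin's mean against the "normal" bin's, location by location, is the kind of contrast from which the study's percentage shifts in expressed sentiment can be derived.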
“Social media data provides us with an unprecedented window into human emotions across cultures and continents,” Wang says. “This approach allows us to measure emotional impacts of climate change at a scale that traditional surveys simply cannot achieve, giving us real-time insights into how temperature affects human sentiment worldwide.”
To assess the effects of temperature on sentiment in higher-income and middle-to-lower-income settings, the scholars also used the World Bank cutoff of $13,845 in gross national income per capita, finding that in places with incomes below that level, the effects of heat on mood were triple those found in economically more robust settings.
“Thanks to the global coverage of our data, we find that people in low- and middle-income countries experience sentiment declines from extreme heat that are three times greater than those in high-income countries,” Fan says. “This underscores the importance of incorporating adaptation into future climate impact projections.”
In the long run
Using long-term global climate models, and expecting some adaptation to heat, the researchers also produced a long-range estimate of the effects of extreme temperatures on sentiment by the year 2100. Extending the current findings to that time frame, they project a 2.3 percent worsening of people’s emotional well-being based on high temperatures alone by then — although that is a far-range projection.
“It's clear now, with our present study adding to findings from prior studies, that weather alters sentiment on a global scale,” Obradovich says. “And as weather and climates change, helping individuals become more resilient to shocks to their emotional states will be an important component of overall societal adaptation.”
The researchers note that there are many nuances to the subject, and room for continued research in this area. For one thing, social media users are unlikely to be a perfectly representative portion of the population, with young children and the elderly almost certainly using social media less than other people. However, as the researchers observe in the paper, the very young and the elderly are probably particularly vulnerable to heat shocks, making the response to hot weather possibly even larger than their study can capture.
The research is part of the Global Sentiment project led by the MIT Sustainable Urbanization Lab, and the study’s dataset is publicly available. Zheng and other co-authors have previously investigated these dynamics using social media, although never before at this scale.
“We hope this resource helps researchers, policymakers, and communities better prepare for a warming world,” Zheng says.
The research was supported, in part, by Zheng’s chaired professorship research fund, and grants Wang received from the National Natural Science Foundation of China and the Chinese Academy of Sciences.
The findings, published today in Nature Neuroscience, have implications for the treatment of ‘phantom limb’ pain, but also suggest that controlling robotic replacement limbs via neural interfaces may be more straightforward than previously thought.
Studies have previously shown that within an area of the brain known as the somatosensory cortex there exists a map of the body, with different regions corresponding to different body parts. These maps are responsible for processing sensory information, such as touch, temperature and pain, as well as body position. For example, if you touch something hot with your hand, this will activate a particular region of the brain; if you stub your toe, a different region activates.
For decades now, the commonly-accepted view among neuroscientists has been that following amputation of a limb, neighbouring regions rearrange and essentially take over the area previously assigned to the now missing limb. This has relied on evidence from studies carried out after amputation, without comparing activity in the brain maps beforehand.
But this has presented a conundrum. Most amputees report phantom sensations, a feeling that the limb is still in place – this can also lead to sensations such as itching or pain in the missing limb. Also, brain imaging studies where amputees have been asked to ‘move’ their missing fingers have shown brain patterns resembling those of able-bodied individuals.
To investigate this contradiction, a team led by Professor Tamar Makin from the University of Cambridge and Dr Hunter Schone from the University of Pittsburgh followed three individuals due to undergo amputation of one of their hands. This is the first time a study has looked at the hand and face maps of individuals both before and after amputation. Most of the work was carried out while Professor Makin and Dr Schone were at UCL.
Prior to amputation, all three individuals were able to move all five digits of their hands. While lying in a functional magnetic resonance imaging (fMRI) scanner – which measures activity in the brain – the participants were asked to move their individual fingers and to purse their lips. The researchers used the brain scans to construct maps of the hand and lips for each individual. In these maps, the lips sit near to the hand.
The participants repeated the activity three months and again six months after amputation, this time asked to purse their lips and to imagine moving individual fingers. One participant was scanned again 18 months after amputation and a second participant five years after amputation.
The researchers examined the signals from the pre-amputation finger maps and compared them against the maps post-amputation. Analysis of the ‘before’ and ‘after’ images revealed a remarkable consistency: even with their hand now missing, the corresponding brain region activated in an almost identical manner.
Professor Makin, from the Medical Research Council Cognition and Brain Science Unit at the University of Cambridge, the study’s senior author, said: “Because of our previous work, we suspected that the brain maps would be largely unchanged, but the extent to which the map of the missing limb remained intact was jaw-dropping.
“Bearing in mind that the somatosensory cortex is responsible for interpreting what’s going on within the body, it seems astonishing that it doesn’t seem to know that the hand is no longer there.”
As previous studies had suggested that the body map reorganises such that neighbouring regions take over, the researchers looked at the region corresponding to the lips to see if it had moved or spread. They found that it remained unchanged and had not taken over the region representing the missing hand.
The study’s first author, Dr Schone from the Department of Physical Medicine and Rehabilitation, University of Pittsburgh, said: “We didn’t see any signs of the reorganisation that is supposed to happen according to the classical way of thinking. The brain maps remained static and unchanged.”
To complement their findings, the researchers compared their case studies to 26 participants who had had upper limbs amputated, on average 23.5 years beforehand. These individuals showed similar brain representations of the hand and lips to those of the three case studies, providing long-term evidence for the stability of hand and lip representations despite amputation.
Brain activity maps for the hand (shown in red) and lips (blue) before and after amputation
The researchers offer an explanation for the previous misunderstanding of what happens within the brain following amputation. They say that the boundaries within the brain maps are not clear cut – while the brain does have a map of the body, each part of the map doesn’t support one body part exclusively. So while inputs from the middle finger may largely activate one region, they also show some activity in the region representing the forefinger, for example. Previous studies that argue for massive reorganisation determined the layout of the maps by applying a ‘winner takes all’ strategy – stimulating the remaining body parts and noting which area of the brain shows most activity; because the missing limb is no longer there to be stimulated, activity from neighbouring limbs has been misinterpreted as taking over.
The findings have implications for the treatment of phantom limb pain, a phenomenon that can plague amputees. Current approaches focus on trying to restore representation of the limb in the brain’s map, but randomised controlled trials to test this approach have shown limited success – today’s study suggests this is because these approaches are focused on the wrong problem.
Dr Schone said: “The remaining parts of the nerves — still inside the residual limb — are no longer connected to their end-targets. They are dramatically cut off from the sensory receptors that have delivered them consistent signals. Without an end-target, the nerves can continue to grow to form a thickening of the nerve tissue and send noisy signals back to the brain.
“The most promising therapies involve rethinking how the amputation surgery is actually performed, for instance grafting the nerves into a new muscle or skin, so they have a new home to attach to.”
Of the three participants, one had substantial limb pain prior to amputation but received a complex procedure to graft the nerves to new muscle or skin; she no longer experiences pain. The other two participants, however, received the standard treatment and continue to experience phantom limb pain.
The University of Pittsburgh is one of a number of institutions that is researching whether movement and sensation can be restored to paralysed limbs or whether amputated limbs might be replaced by artificial, robotic limbs controlled by a brain interface. Today’s study suggests that because the brain maps are preserved, it should – in theory – be possible to restore movement to a paralysed limb or for the brain to control a prosthetic.
Dr Chris Baker from the Laboratory of Brain & Cognition, National Institutes of Mental Health, said: “If the brain rewired itself after amputation, these technologies would fail. If the area that had been responsible for controlling your hand was now responsible for your face, these implants just wouldn’t work. Our findings provide a real opportunity to develop these technologies now.”
Dr Schone added: “Now that we’ve shown these maps are stable, brain-computer interface technologies can operate under the assumption that the body map remains consistent over time. This allows us to move into the next frontier: accessing finer details of the hand map — like distinguishing the tip of the finger from the base — and restoring the rich, qualitative aspects of sensation, such as texture, shape, and temperature. This study is a powerful reminder that even after limb loss, the brain holds onto the body, waiting for us to reconnect.”
The research was supported by Wellcome, the National Institute of Mental Health, National Institutes of Health and Medical Research Council.
The brain holds a ‘map’ of the body that remains unchanged even after a limb has been amputated, contrary to the prevailing view that it rearranges itself to compensate for the loss, according to new research from scientists in the UK and US.
A multiyear program at MIT Lincoln Laboratory to characterize how biological and chemical vapors and aerosols disperse through the New York City subway system is coming to a close. The program, part of the U.S. Department of Homeland Security (DHS) Science and Technology Directorate's Urban Area Security Initiative, builds on other efforts at Lincoln Laboratory to detect chemical and biological threats, validate air dispersion models, and improve emergency protocols in urban areas in case of an airborne attack. The results of this program will inform the New York Metropolitan Transportation Authority (MTA) on how best to install an efficient, cost-effective system for airborne threat detection and mitigation throughout the subway. On a broader scale, the study will help the national security community understand pragmatic chemical and biological defense options for mass transit, critical facilities, and special events.
Trina Vian from the laboratory's Counter–Weapons of Mass Destruction (WMD) Systems Group led this project, which she says had as much to do with air flow and sensors as it did with MTA protocols and NYC commuters. "There are real dangers associated with panic during an alarm. People can get hurt during mass evacuation, or lose trust in a system and the authorities that administer that system, if there are false alarms," she says. "A novel aspect of our project was to investigate effective low-regret response options, meaning those with little operational consequence to responding to a false alarm."
Currently, depending on the severity of the alarm, the MTA's response can include stopping service and evacuating passengers and employees.
A complex environment for testing
For the program, which started in 2019, Vian and her team collected data on how chemical and biological sensors performed in the subway, what factors affected sensor accuracy, and how different mitigation protocols fared in stopping an airborne threat from spreading and removing the threat from a contaminated location. For their tests, they released batches of a safe, custom-developed aerosol simulant within Grand Central Station that they could track with DNA barcodes. Each batch had a different barcode, which allowed the team to differentiate among them and quantitatively assess different combinations of mitigation strategies.
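A hedged sketch of how barcoded releases support quantitative comparison: with a unique DNA barcode per batch, downstream detection counts can be normalized against an unmitigated baseline release to score each mitigation combination. The function, barcode names, and counts below are invented for illustration and are not drawn from the Laboratory's analysis.

```python
# Illustrative only: score mitigation combinations by comparing each
# barcoded release's downstream particle count against a no-mitigation
# baseline release. All identifiers and numbers are hypothetical.
def mitigation_effectiveness(counts, baseline_barcode):
    """counts: dict mapping barcode -> particles detected downstream.
    Returns dict mapping barcode -> fractional reduction vs baseline."""
    base = counts[baseline_barcode]
    return {
        bc: 1.0 - (n / base)
        for bc, n in counts.items()
        if bc != baseline_barcode
    }

detected = {
    "BC-baseline": 1000,    # hypothetical: no mitigation applied
    "BC-curtain": 400,      # hypothetical: air curtain only
    "BC-curtain+mist": 150, # hypothetical: air curtain + spray knockdown
}
eff = mitigation_effectiveness(detected, "BC-baseline")
print(eff)
```

Because each batch carries its own barcode, several configurations can be tested in the same station and still be disentangled afterward, which is what makes this kind of side-by-side scoring possible.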
To control and isolate air flow, the team tested static air curtains as well as air filtration systems. They also tested a spray knockdown system, developed by Sandia National Laboratories, designed to reduce and isolate particulate hazards in large-volume areas. The system sprays a fine water mist into the tunnels that attaches to threat particulates and uses gravity to rain out the threat material. The spray consists of droplets of a particular size and concentration, delivered with an applied electrostatic field. The original idea for the system was adapted from the coal mining industry, which used liquid sprayers to reduce the amount of inhalable soot.
The tests were done in a busy environment, and the team was required to complete trainings on MTA protocols such as track safety and how to interact with the public.
"We had long and sometimes very dirty days," says Jason Han of the Counter–WMD Systems Group, who collected measurements in the tunnels and analyzed the data. "We all wore bright orange contractor safety vests, which made people think we were official employees of the MTA. We would often get approached by people asking for directions!"
At times, issues such as power outages or database errors could disrupt data capture.
"We learned fairly early on that we had to capture daily data backups and keep a daily evolving master list of unique sensor identifiers and locations," says fellow team member Cassie Smith. "We developed workflows and wrote scripts to help automate the process, which ensured successful sensor data capture and attribution."
The team also worked closely with the MTA to make sure their tests and data capture ran smoothly. "The MTA was great at helping us maintain the test bed, doing as much as they could in our physical absence," Vian says.
Calling on industry
Another crucial aspect of the program was to connect with the greater chemical and biological industrial community to solicit their sensors for testing. These partnerships reduced the cost for DHS to bring new sensing technologies into the project, and, in return, participants gained a testing and data collection opportunity within the challenging NYC subway environment.
The team ultimately fielded 16 different sensors, each with varying degrees of maturity, that operated through a range of methods, such as ultraviolet laser–induced fluorescence, polymerase chain reaction, and long-wave infrared spectrometry.
"The partners appreciated the unique data they got and the opportunity to work with the MTA and experience an environment and customer base that they may not have anticipated before," Vian says.
The team finished testing in 2024 and has delivered the final report to the DHS. The MTA will use the report to help expand their PROTECT chemical detection system (originally developed by Argonne National Laboratory) from Grand Central Station into adjacent stations. They expect to complete this work in 2026.
"The value of this program cannot be overstated. This partnership with DHS and MIT Lincoln Laboratory has led to the identification of the best-suited systems for the MTA’s unique operating environment," says Michael Gemelli, director of chemical, biological, radiological, and nuclear/WMD detection and mitigation at the New York MTA.
"Other transit authorities can leverage these results to start building effective chemical and biological defense systems for their own specific spaces and threat priorities," adds Benjamin Ervin, leader of Lincoln Laboratory's Counter–WMD Systems Group. "Specific test and evaluation within the operational environment of interest, however, is always recommended to ensure defense system objectives are met."
Building these types of decision-making reports for airborne chemical and biological sensing has been a part of Lincoln Laboratory's mission since the mid-1990s. The laboratory also helped to define priorities in the field when DHS was forming in the early 2000s.
Beyond this study, Lincoln Laboratory is leading several other projects focused on forecasting the impact of novel chemical and biological threats within multiple domains — military, space, agriculture, health, etc. — and on prototyping rapid, autonomous, high-confidence biological identification capabilities for the homeland to provide actionable evidence of hazardous environments.
Lincoln Laboratory staff member Kevin Geisel places sampling equipment near the 42nd Street Shuttle at Grand Central Station to test airborne threat–mitigation strategies.
A multiyear program at MIT Lincoln Laboratory to characterize how biological and chemical vapors and aerosols disperse through the New York City subway system is coming to a close. The program, part of the U.S. Department of Homeland Security (DHS) Science and Technology Directorate's Urban Area Security Initiative, builds on other efforts at Lincoln Laboratory to detect chemical and biological threats, validate air dispersion models, and improve emergency protocols in urban areas in case of an airborne attack. The results of this program will inform the New York Metropolitan Transportation Authority (MTA) on how best to install an efficient, cost-effective system for airborne threat detection and mitigation throughout the subway. On a broader scale, the study will help the national security community understand pragmatic chemical and biological defense options for mass transit, critical facilities, and special events.
Trina Vian from the laboratory's Counter–Weapons of Mass Destruction (WMD) Systems Group led this project, which she says had as much to do with air flow and sensors as it did with MTA protocols and NYC commuters. "There are real dangers associated with panic during an alarm. People can get hurt during mass evacuation, or lose trust in a system and the authorities that administer that system, if there are false alarms," she says. "A novel aspect of our project was to investigate effective low-regret response options, meaning those with little operational consequence to responding to a false alarm."
Currently, depending on the severity of the alarm, the MTA's response can include stopping service and evacuating passengers and employees.
A complex environment for testing
For the program, which started in 2019, Vian and her team collected data on how chemical and biological sensors performed in the subway, what factors affected sensor accuracy, and how different mitigation protocols fared in stopping an airborne threat from spreading and removing the threat from a contaminated location. For their tests, they released batches of a safe, custom-developed aerosol simulant within Grand Central Station that they could track with DNA barcodes. Each batch had a different barcode, which allowed the team to differentiate among them and quantitatively assess different combinations of mitigation strategies.
To control and isolate air flow, the team tested static air curtains as well as air filtration systems. They also tested a spray knockdown system developed by Sandia National Laboratories designed to reduce and isolate particulate hazards in large volume areas. The system sprays a fine water mist into the tunnels that attaches to threat particulates and uses gravity to rain out the threat material. The spray contains droplets of a particular size and concentration, and with an applied electrostatic field. The original idea for the system was adapted from the coal mining industry, which used liquid sprayers to reduce the amount of inhalable soot.
The tests were done in a busy environment, and the team was required to complete trainings on MTA protocols such as track safety and how to interact with the public.
"We had long and sometimes very dirty days," says Jason Han of the Counter–WMD Systems Group, who collected measurements in the tunnels and analyzed the data. "We all wore bright orange contractor safety vests, which made people think we were official employees of the MTA. We would often get approached by people asking for directions!"
At times, issues such as power outages or database errors could disrupt data capture.
"We learned fairly early on that we had to capture daily data backups and keep a daily evolving master list of unique sensor identifiers and locations," says fellow team member Cassie Smith. "We developed workflows and wrote scripts to help automate the process, which ensured successful sensor data capture and attribution."
The team also worked closely with the MTA to make sure their tests and data capture ran smoothly. "The MTA was great at helping us maintain the test bed, doing as much as they could in our physical absence," Vian says.
Calling on industry
Another crucial aspect of the program was to connect with the greater chemical and biological industrial community to solicit their sensors for testing. These partnerships reduced the cost for DHS to bring new sensing technologies into the project, and, in return, participants gained a testing and data collection opportunity within the challenging NYC subway environment.
The team ultimately fielded 16 different sensors at varying levels of maturity, operating through a range of methods such as ultraviolet laser–induced fluorescence, polymerase chain reaction, and long-wave infrared spectrometry.
"The partners appreciated the unique data they got and the opportunity to work with the MTA and experience an environment and customer base that they may not have anticipated before," Vian says.
The team finished testing in 2024 and has delivered the final report to the DHS. The MTA will use the report to help expand their PROTECT chemical detection system (originally developed by Argonne National Laboratory) from Grand Central Station into adjacent stations. They expect to complete this work in 2026.
"The value of this program cannot be overstated. This partnership with DHS and MIT Lincoln Laboratory has led to the identification of the best-suited systems for the MTA’s unique operating environment," says Michael Gemelli, director of chemical, biological, radiological, and nuclear/WMD detection and mitigation at the New York MTA.
"Other transit authorities can leverage these results to start building effective chemical and biological defense systems for their own specific spaces and threat priorities," adds Benjamin Ervin, leader of Lincoln Laboratory's Counter–WMD Systems Group. "Specific test and evaluation within the operational environment of interest, however, is always recommended to ensure defense system objectives are met."
Building these types of decision-making reports for airborne chemical and biological sensing has been a part of Lincoln Laboratory's mission since the mid-1990s. The laboratory also helped to define priorities in the field when DHS was forming in the early 2000s.
Beyond this study, the Lincoln Laboratory is leading several other projects focused on forecasting the impact of novel chemical and biological threats within multiple domains — military, space, agriculture, health, etc. — and on prototyping rapid, autonomous, high-confidence biological identification capabilities for the homeland to provide actionable evidence of hazardous environments.
Lincoln Laboratory staff member Kevin Geisel places sampling equipment near the 42nd Street Shuttle at Grand Central Station to test airborne threat–mitigation strategies.
From the wars in Gaza and Ukraine to protracted crises in Africa, geopolitical tensions abound in today’s turbulent world. But there is a bright spot closer to home.
Guided by the basic principles of non-interference, non-aggression, decision-making through consensus, and quiet diplomacy, the Association of Southeast Asian Nations (ASEAN) has remained a shining example of hope in the region, said ASEAN Secretary-General Dr Kao Kim Hourn.
“History has taught us that peace is not a natural state of affairs – it is sustained by restraint, dialogue, diplomacy, and a shared commitment to order,” noted Dr Kao at a lecture organised by the NUS Centre for International Law (CIL) on 4 August 2025 that was attended by about 130 participants, including policymakers, diplomats, government officials as well as members of academia and the private sector from Singapore and the region.
“ASEAN’s journey stands as a profound testament to the tenacity required to make regionalism and multilateralism work.”
ASEAN is an area of focus for CIL, particularly in its role as a pillar of international law, said CIL Director, Dr Nilufer Oral. In her welcome remarks at the annual CIL-NUS ASEAN Distinguished Lecture, she hailed ASEAN as “an important building block in the international legal system”.
Echoing this, Dr Kao added: “Even in times of difficulty, we do not abandon our faith in the power of diplomacy, a culture of dialogue, and a sense of shared purpose. That is the ‘ASEAN Way’, and it remains a source of light in an increasingly uncertain and polarised world.”
His lecture, titled “ASEAN: A Bright Spot in a Darkening World”, focused on the continued relevance of the bloc, which was founded in 1967. But the brightness can dim occasionally.
“The recent flare-up along the Cambodia-Thailand border should serve as a sobering wake-up call…we can neither afford complacency, nor take peace for granted,” stressed the Cambodian diplomat, who has been ASEAN’s Secretary-General since 2023.
Clashes between the two countries in July de-escalated after an intervention by neighbouring leader Mr Anwar Ibrahim, Prime Minister of Malaysia, who is the current ASEAN Chair.
Unity amid uncertainty
Dr Kao emphasised ASEAN’s commitment to future-proofing the region, as set out in the “ASEAN 2045” agenda adopted at the 46th ASEAN Summit in May 2025.
“For the first time in ASEAN’s history, we have articulated a 20-year outlook that anticipates the global megatrends already reshaping the international system,” he said, highlighting the challenges of climate change, demographic shifts, and technological disruptions.
“‘ASEAN 2045’ seeks not just to respond to them, but also to harness them in shaping a dynamic, inclusive, and future-ready region.”
Other challenges threatening the rules-based world order include the rising tides of unilateralism, fragmentation, and protectionism; Dr Kao called for concerted effort and vigilance to sustain the trust on which that order rests.
Such efforts are especially crucial given the outsized impact of external forces on the region.
For example, Dr Kao acknowledged that US President Donald Trump’s support was crucial to securing peace in the Indo-Pacific. But he said ASEAN had to make known to the US that its recent tariffs had caused “a lot of uncertainty” in the region, especially in the private sector.
“ASEAN has been trying to respond collectively as a region to the US,” said Dr Kao in his reply to a question on the rules-based order, adding that the bloc is also working to boost trade within the region and with partners like China, Korea, and India.
A beacon in a darkening world
Amid geopolitical tensions, ASEAN’s growing appeal is evident as more non-Southeast Asian countries join ASEAN-led partnerships. It has not only championed peace, but also “reinforced its position as a cornerstone of regional economic success”, said Dr Kao.
Currently the world’s fifth-largest economy, ASEAN is projected to become the fourth-largest by 2030. In 2023, it secured a record US$230 billion in foreign direct investment.
Asked if ASEAN was in need of reform, Dr Kao conceded the bloc had its “shortcomings” such as not delivering results quickly enough, and was working to address them. On a separate note, he highlighted that ASEAN was making inroads in AI-related issues ranging from governance to ethics, and was gaining “a lot of momentum” in cybersecurity.
Weighing in, Mr Ong Keng Yong, Singapore’s Ambassador-at-Large and current CIL Governing Board Member, who moderated the dialogue, added that countries needed more political will to enforce cybersecurity-related laws.
Participants believed the topics discussed were timely. “The lecture was an accurate reflection of the processes and the recent priorities of ASEAN,” said Ms Diane Shayne D. Lipana, Acting Director of ASEAN Affairs at the Philippines’ Department of Foreign Affairs.
In these uncertain times, ASEAN’s role is more vital than ever. “We will redouble our efforts to ensure that ASEAN remains – steadily and resolutely – a bright beacon in a darkening world,” Dr Kao added.
From toddlers’ timeouts to criminals’ prison sentences, punishment reinforces social norms, making it known that an offender has done something unacceptable. At least, that is usually the intent — but the strategy can backfire. When a punishment is perceived as too harsh, observers can be left with the impression that an authority figure is motivated by something other than justice.
It can be hard to predict what people will take away from a particular punishment, because everyone makes their own inferences not just about the acceptability of the act that led to the punishment, but also the legitimacy of the authority who imposed it. A new computational model developed by scientists at MIT’s McGovern Institute for Brain Research makes sense of these complicated cognitive processes, recreating the ways people learn from punishment and revealing how their reasoning is shaped by their prior beliefs.
Their work, reported Aug. 4 in the journal PNAS, explains how a single punishment can send different messages to different people, and even strengthen the opposing viewpoints of groups who hold different opinions about authorities or social norms.
“The key intuition in this model is the fact that you have to be evaluating simultaneously both the norm to be learned and the authority who’s punishing,” says McGovern investigator and John W. Jarve Professor of Brain and Cognitive Sciences Rebecca Saxe, who led the research. “One really important consequence of that is even where nobody disagrees about the facts — everybody knows what action happened, who punished it, and what they did to punish it — different observers of the same situation could come to different conclusions.”
For example, she says, a child who is sent to timeout after biting a sibling might interpret the event differently than the parent. One might see the punishment as proportional and important, teaching the child not to bite. But if the biting, to the toddler, seemed a reasonable tactic in the midst of a squabble, the punishment might be seen as unfair, and the lesson will be lost.
People draw on their own knowledge and opinions when they evaluate these situations — but to study how the brain interprets punishment, Saxe and graduate student Setayesh Radkani wanted to take those personal ideas out of the equation. They needed a clear understanding of the beliefs that people held when they observed a punishment, so they could learn how different kinds of information altered their perceptions. So Radkani set up scenarios in imaginary villages where authorities punished individuals for actions that had no obvious analog in the real world.
Participants observed these scenarios in a series of experiments, with different information offered in each one. In some cases, for example, participants were told that the person being punished was either an ally or a competitor of the authority, whereas in other cases, the authority’s possible bias was left ambiguous.
“That gives us a really controlled setup to vary prior beliefs,” Radkani explains. “We could ask what people learn from observing punitive decisions with different severities, in response to acts that vary in their level of wrongness, by authorities that vary in their level of different motives.”
For each scenario, participants were asked to evaluate four factors: how much the authority figure cared about justice; the selfishness of the authority; the authority’s bias for or against the individual being punished; and the wrongness of the punished act. The research team asked these questions when participants were first introduced to the hypothetical society, then tracked how their responses changed after they observed the punishment. Across the scenarios, participants’ initial beliefs about the authority and the wrongness of the act shaped the extent to which those beliefs shifted after they observed the punishment.
Radkani was able to replicate these nuanced interpretations using a cognitive model framed around an idea that Saxe’s team has long used to think about how people interpret the actions of others. That is, to make inferences about others’ intentions and beliefs, we assume that people choose actions that they expect will help them achieve their goals.
To apply that concept to the punishment scenarios, Radkani developed a model that evaluates the meaning of a punishment (an action aimed at achieving a goal of the authority) by considering the harm associated with that punishment; its costs or benefits to the authority; and its proportionality to the violation. By assessing these factors, along with prior beliefs about the authority and the punished act, the model was able to predict people’s responses to the hypothetical punishment scenarios, supporting the idea that people use a similar mental model. “You need to have them consider those things, or you can’t make sense of how people understand punishment when they observe it,” Saxe says.
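The joint-inference idea can be illustrated with a toy Bayesian sketch. This is not the authors' model or code; the hypothesis space, the likelihood function, and every number below are invented purely to show the structure: an observer updates a joint prior over the act's wrongness and the authority's motive, assuming the authority chooses punishments that serve its goals.

```python
import itertools

wrongness_vals = [0.2, 0.5, 0.8]   # candidate levels of act wrongness
motives = ["just", "biased"]       # candidate authority motives

def likelihood(severity, wrongness, motive):
    """P(observed severity | wrongness, motive): a just authority
    punishes in proportion to wrongness, while a biased one
    punishes harshly regardless of the act. (Toy assumption.)"""
    expected = wrongness if motive == "just" else 0.9
    return max(1e-6, 1.0 - abs(severity - expected))

def posterior(severity, prior):
    """Bayes-update the joint prior over (wrongness, motive)."""
    post = {h: prior[h] * likelihood(severity, *h)
            for h in itertools.product(wrongness_vals, motives)}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

# A harsh punishment (severity 0.9) observed under a uniform prior
# shifts belief toward the authority being biased, because harshness
# is more probable under bias than under proportional justice.
uniform = {h: 1 / 6 for h in itertools.product(wrongness_vals, motives)}
post = posterior(0.9, uniform)
p_biased = sum(p for (w, m), p in post.items() if m == "biased")
```

Starting two observers from different priors over the authority's motive and feeding them the same observation yields different posteriors, which is the mechanism behind the divergence the paper describes.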
Even though the team designed their experiments to preclude preconceived ideas about the people and actions in their imaginary villages, not everyone drew the same conclusions from the punishments they observed. Saxe’s group found that participants’ general attitudes toward authority influenced their interpretation of events. Those with more authoritarian attitudes — assessed through a standard survey — tended to judge punished acts as more wrong and authorities as more motivated by justice than other observers.
“If we differ from other people, there’s a knee-jerk tendency to say, ‘either they have different evidence from us, or they’re crazy,’” Saxe says. Instead, she says, “It’s part of the way humans think about each other’s actions.”
“When a group of people who start out with different prior beliefs get shared evidence, they will not end up necessarily with shared beliefs. That’s true even if everybody is behaving rationally,” says Saxe.
This way of thinking also means that the same action can simultaneously strengthen opposing viewpoints. The Saxe lab’s modeling and experiments showed that when those viewpoints shape individuals’ interpretations of future punishments, the groups’ opinions will continue to diverge. For instance, a punishment that seems too harsh to a group who suspects an authority is biased can make that group even more skeptical of the authority’s future actions. Meanwhile, people who see the same punishment as fair and the authority as just will be more likely to conclude that the authority figure’s future actions are also just.
“You will get a vicious cycle of polarization, staying and actually spreading to new things,” says Radkani.
The researchers say their findings point toward strategies for communicating social norms through punishment. “It is exactly sensible in our model to do everything you can to make your action look like it’s coming out of a place of care for the long-term outcome of this individual, and that it’s proportional to the norm violation they did,” Saxe says. “That is your best shot at getting a punishment interpreted pedagogically, rather than as evidence that you’re a bully.”
Nevertheless, she says that won’t always be enough. “If the beliefs are strong the other way, it’s very hard to punish and still sustain a belief that you were motivated by justice.”
Joining Saxe and Radkani on the paper is Joshua Tenenbaum, MIT professor of brain and cognitive sciences. The study was funded, in part, by the Patrick J. McGovern Foundation.
McGovern Institute researchers show that the same punishment can either build respect for authority or deepen distrust — depending on what people already believe.
The U.S. Food and Drug Administration’s recent approval of the first CRISPR-Cas9–based gene therapy has marked a major milestone in biomedicine, validating genome editing as a promising treatment strategy for disorders like sickle cell disease, muscular dystrophy, and certain cancers.
CRISPR-Cas9, often likened to “molecular scissors,” allows scientists to cut DNA at targeted sites to snip, repair, or replace genes. But despite its power, Cas9 poses a critical safety risk: The active enzyme can linger in cells and cause unintended DNA breaks — so-called off-target effects — which may trigger harmful mutations in healthy genes.
Now, researchers in the labs of Ronald T. Raines, MIT professor of chemistry, and Amit Choudhary, professor of medicine at Harvard Medical School, have engineered a precise way to turn Cas9 off after its job is done — significantly reducing off-target effects and improving the clinical safety of gene editing. Their findings are detailed in a new paper published in the Proceedings of the National Academy of Sciences (PNAS).
“To ‘turn off’ Cas9 after it achieves its intended genome-editing outcome, we developed the first cell-permeable anti-CRISPR protein system,” says Raines, the Roger and Georges Firmenich Professor of Natural Products Chemistry. “Our technology reduces the off-target activity of Cas9 and increases its genome-editing specificity and clinical utility.”
The new tool — called LFN-Acr/PA — uses a protein-based delivery system to ferry anti-CRISPR proteins into human cells rapidly and efficiently. While natural Type II anti-CRISPR proteins (Acrs) are known to inhibit Cas9, their use in therapy has been limited because they’re often too bulky or charged to enter cells, and conventional delivery methods are too slow or ineffective.
LFN-Acr/PA overcomes these hurdles using a component derived from anthrax toxin to introduce Acrs into cells within minutes. Even at picomolar concentrations, the system shuts down Cas9 activity with remarkable speed and precision — boosting genome-editing specificity up to 40 percent.
Bradley L. Pentelute, MIT professor of chemistry, is an expert on the anthrax delivery system, and is also an author of the paper.
The implications of this advance are wide-ranging. With patent applications filed, LFN-Acr/PA represents a faster, safer, and more controllable means of harnessing CRISPR-Cas9, opening the door to more-refined gene therapies with fewer unintended consequences.
The research was supported by the National Institutes of Health and a Gilliam Fellowship from the Howard Hughes Medical Institute awarded to lead author Axel O. Vera, a graduate student in the Department of Chemistry.
A new tool developed by MIT researchers represents a faster, safer, and more controllable means of harnessing CRISPR-Cas9, opening the door to more-refined gene therapies with fewer unintended consequences.
Materials research thrives across MIT, spanning disciplines and departments. Recent breakthroughs include strategies for securing sustainable supplies of nickel — critical to clean-energy technologies (Department of Materials Science and Engineering); the discovery of unexpected magnetism in atomically thin quantum materials (Department of Physics); and the development of adhesive coatings that reduce scarring around medical implants (departments of Mechanical Engineering and Civil and Environmental Engineering).
At the center of these efforts is the Materials Research Laboratory (MRL), a hub that connects and supports the Institute’s materials research community. “MRL serves as a home for the entire materials research community at MIT,” says C. Cem Tasan, who became director in April 2025. “Our goal is to make it easier for our faculty to conduct their extraordinary research,” adds Tasan, the POSCO Associate Professor of Metallurgy in the Department of Materials Science and Engineering.
A storied history
Established in 2017, the MRL brings together more than 30 researchers and builds on a 48-year legacy of innovation. It was formed through the merger of the MIT Materials Processing Center (MPC) and the Center for Materials Science and Engineering (CMSE), two institutions that helped lay the foundation for MIT’s global leadership in materials science.
Over the years, research supported by MPC and CMSE has led to transformative technologies and successful spinout companies. Notable examples include AMSC, based on advances in superconductivity; OmniGuide, which developed cutting-edge optical fiber technologies; and QD Vision, a pioneer in quantum dot technology acquired by Samsung in 2016. Another landmark achievement was the development of the first germanium laser to operate at room temperature — a breakthrough now used in optical communications.
Enabling research through partnership and support
MRL is launching targeted initiatives to connect MIT researchers with industry partners around specific technical challenges. Each initiative will be led by a junior faculty member working closely with MRL to identify a problem that aligns with their research expertise and is relevant to industry needs.
Through multi-year collaborations with participating companies, faculty can explore early-stage solutions in partnership with postdocs or graduate students. These initiatives are designed to be agile and interdisciplinary, with the potential to grow into major, long-term research programs.
Behind-the-scenes support, front-line impact
MRL provides critical infrastructure that enables faculty to focus on discovery, not logistics. “MRL works silently in the background, where every problem a principal investigator has related to the administration of materials research is solved with efficiency, good organization, and minimum effort,” says Tasan.
This quiet but powerful support spans multiple areas:
The finance team manages grants and helps secure new funding opportunities.
The human resources team supports the hiring of postdocs.
The communications team amplifies the lab’s impact through compelling stories shared with the public and funding agencies.
The events team plans and coordinates conferences, seminars, and symposia that foster collaboration within the MIT community and with external partners.
Together, these functions ensure that research at MRL runs smoothly and effectively — from initial idea to lasting innovation.
Leadership with a vision
Tasan, who also leads a research group focused on metallurgy, says he took on the directorship because “I thrive on new challenges.” He also saw the role as an opportunity to contribute more broadly to MIT.
“I believe MRL can play an even greater role in advancing materials research across the Institute, and I’m excited to help make that happen,” he says.
MIT’s Great Dome rises against the Boston skyline behind the Vannevar Bush Building. Also known as Building 13, the structure houses the offices and many of the facilities of MIT’s Materials Research Laboratory.
Materials research thrives across MIT, spanning disciplines and departments. Recent breakthroughs include strategies for securing sustainable supplies of nickel — critical to clean-energy technologies (Department of Materials Science and Engineering); the discovery of unexpected magnetism in atomically thin quantum materials (Department of Physics); and the development of adhesive coatings that reduce scarring around medical implants (departments of Mechanical Engineering and Civil and Environmental Engineering).
At the center of these efforts is the Materials Research Laboratory (MRL), a hub that connects and supports the Institute’s materials research community. “MRL serves as a home for the entire materials research community at MIT,” says C. Cem Tasan, who became director in April 2025. “Our goal is to make it easier for our faculty to conduct their extraordinary research,” adds Tasan, the POSCO Associate Professor of Metallurgy in the Department of Materials Science and Engineering.
A storied history
Established in 2017, the MRL brings together more than 30 researchers and builds on a 48-year legacy of innovation. It was formed through the merger of the MIT Materials Processing Center (MPC) and the Center for Materials Science and Engineering (CMSE), two institutions that helped lay the foundation for MIT’s global leadership in materials science.
Over the years, research supported by MPC and CMSE has led to transformative technologies and successful spinout companies. Notable examples include AMSC, based on advances in superconductivity; OmniGuide, which developed cutting-edge optical fiber technologies; and QD Vision, a pioneer in quantum dot technology acquired by Samsung in 2016. Another landmark achievement was the development of the first germanium laser to operate at room temperature — a breakthrough now used in optical communications.
Enabling research through partnership and support
MRL is launching targeted initiatives to connect MIT researchers with industry partners around specific technical challenges. Each initiative will be led by a junior faculty member working closely with MRL to identify a problem that aligns with their research expertise and is relevant to industry needs.
Through multi-year collaborations with participating companies, faculty can explore early-stage solutions in partnership with postdocs or graduate students. These initiatives are designed to be agile and interdisciplinary, with the potential to grow into major, long-term research programs.
Behind-the-scenes support, front-line impact
MRL provides critical infrastructure that enables faculty to focus on discovery, not logistics. “MRL works silently in the background, where every problem a principal investigator has related to the administration of materials research is solved with efficiency, good organization, and minimum effort,” says Tasan.
This quiet but powerful support spans multiple areas:
The finance team manages grants and helps secure new funding opportunities.
The human resources team supports the hiring of postdocs.
The communications team amplifies the lab’s impact through compelling stories shared with the public and funding agencies.
The events team plans and coordinates conferences, seminars, and symposia that foster collaboration within the MIT community and with external partners.
Together, these functions ensure that research at MRL runs smoothly and effectively — from initial idea to lasting innovation.
Leadership with a vision
Tasan, who also leads a research group focused on metallurgy, says he took on the directorship because “I thrive on new challenges.” He also saw the role as an opportunity to contribute more broadly to MIT.
“I believe MRL can play an even greater role in advancing materials research across the Institute, and I’m excited to help make that happen,” he says.
MIT’s Great Dome rises against the Boston skyline behind the Vannevar Bush Building. Also known as Building 13, the structure houses the offices and many of the facilities of MIT’s Materials Research Laboratory.
Health experts urge policies that buoy families: lower living costs, affordable childcare, help for older parents who want more kids
Alvin Powell
Harvard Staff Writer
Financial-incentive programs for prospective parents don’t work as a way to reverse falling birth rates, Harvard health experts said on Tuesday, weighing in on a policy option that has been in the news in recent months.
Instead, they said, a more effective approach would be to target issues that make parenting difficult: the high cost of living, a lack of affordable childcare, and better options for older parents who still want to see their families grow.
The discussion, held at The Studio at Harvard T.H. Chan School of Public Health, came in the wake of a July report from the Centers for Disease Control and Prevention that showed that the U.S. fertility rate was down 22 percent since the last peak in 2007.
Ana Langer, professor of the practice of public health, emerita, said the causes of fertility decline are numerous, complex, and difficult to reverse.
Surveys investigating why people might not want children cite things such as the cost of living, negative medical experiences from previous pregnancies, and wariness about major global issues such as climate change. In fact, she said, many survey respondents are surprised that declining fertility is even a problem and say they’re more concerned about overpopulation and its impacts on the planet.
The landscape is complicated by the fact that U.S. society has changed significantly since the 1960s, when expectations were that virtually everyone wanted to raise a family. Today, she said, people feel free to focus on careers rather than families, and there is far greater acceptance of those who decide never to have children.
Margaret Anne McConnell, the Chan School’s Bruce A. Beal, Robert L. Beal and Alexander S. Beal Professor of Global Health Economics, said some of the factors that have contributed to the declining birth rate reflect positive cultural shifts.
Fertility rates are falling fastest, for example, in the youngest demographic, girls ages 15 to 20. Teen pregnancy has long been considered a societal ill and is associated with difficult pregnancies, poor infant health, interrupted education, and poor job prospects.
Other factors include the widespread availability of birth control, which gives women more reproductive choice, as well as the increasing share of women in higher education and the workforce.
Today people feel free to focus on careers rather than families, and there is far greater acceptance of those who decide never to have children.
Margaret Anne McConnell
McConnell said some people stop short of having the number of children they desire due to fertility, medical, and other issues. One way to address declining fertility, she said, would be to find ways to enable those parents to have the number of children they wish.
“Any time we see people being able to make fertility choices that suit their family, I think that’s a success,” McConnell said. “I think people choosing to have children later in life is also a success. … To the extent that we can make it possible for people to reach whatever their desired family size is, I think that that would be a societal priority.”
The event, “America’s declining birth rate: A public health perspective,” brought together Langer, McConnell, and Henning Tiemeier, the Chan School’s Sumner and Esther Feldberg Professor of Maternal and Child Health.
Addressing the declining birth rate has become a focus of the current administration — President Trump has floated the idea of a $5,000 “baby bonus” and $1,000 “Trump Accounts” that were part of the “One Big Beautiful Bill” approved this summer.
Panelists at the virtual event pointed out that a declining birth rate is not just a problem in the U.S. It has been declining in many countries around the world, and for many of the same reasons. As people — particularly women — become better educated and wealthier, they tend to choose smaller families than their parents and grandparents.
Tiemeier said that changing societies and cultures have altered the very nature of relationships between men and women. He added sex education to the list of key changes that have fueled the birth-rate decline, particularly for teen pregnancies. The question of whether declining fertility is a problem is too simple for such a complex issue, he said.
In a country with a growing population, where women have three children on average, the birth rate falling to 2.5, slightly above the replacement value, would be beneficial economically, ensuring more workers to support the population as it ages.
Countries with a birth rate below 1, whose populations are already contracting, risk having too few workers to fuel their economies, not to mention the broader social impacts of a lack of young people.
Tiemeier and McConnell said that other countries have tried simply paying people to have more children, and it doesn’t work. Even if the declining birth rate were considered a catastrophe, McConnell said, governments haven’t yet found levers that can bring it back up.
That doesn’t mean there aren’t things government can do to help parents navigate a difficult and expensive time in life. Programs to lower the cost of childcare have been instituted in some cities and states, and more can be done.
Tiemeier said both Republicans and Democrats are interested in supporting families, though their approaches may be different. So this may be a rare issue on which they could find common ground.
Other areas of need include maternal health — a significant part of the population lives in healthcare “deserts” far from medical care. Programs designed to reach those areas, as well as a national parental-leave policy, would help young families navigate that time.
“Any measure that we take will have a modest effect, because there are so many things contributing to this,” Tiemeier said. “To say that we are waiting and looking for a measure that has a big effect is an illusion. There are no big effects in this discussion.”
Pierre E. Dupont holds a transcatheter valve repair device with a motorized catheter drive system, replacing the traditional manual handle.
Niles Singer/Harvard Staff Photographer
Alvin Powell
Harvard Staff Writer
Medical robotics expert says coming autonomous devices will augment skills of clinicians (not replace them), extend reach of cutting-edge procedures
The robot doctor will see you now? Not for the foreseeable future, anyway.
Medical robots today are pretty dumb, typically acting as extensions of a surgeon’s hands rather than taking over for them. Pierre E. Dupont, professor of surgery at Harvard Medical School, co-authored a Viewpoint article in the journal Science Robotics last month saying that autonomous surgical robots that learn as they go are on the way.
But their likely impact will be to augment the skills of clinicians, not replace them, and to extend the reach of cutting-edge advances beyond the urban campuses of academic medical centers where they typically emerge.
In this edited conversation, Dupont, who is also chief of pediatric cardiac bioengineering at Boston Children’s Hospital, spoke with the Gazette about the areas most likely to see surgical robots operating autonomously, and some of the hurdles to their adoption.
You note that robot autonomy and learning system technologies are being used in manufacturing as well as medical settings. How does that work?
Yes, in just about every other field, robots are used as autonomous agents to replace the manpower that would be needed to perform a task. But in many surgical applications, like laparoscopy, they’re used as extensions of the clinician’s hand. They improve ergonomics for the clinician, but there’s still some question as to how much they’re improving the experience for the patient.
Outside of medicine, teleoperation, in which the operator uses a mechanical input device to directly control robot motion, is only used in remote or hostile environments like space or the ocean floor. But it’s how laparoscopic robots are controlled.
The hot extension today, which ties into hospital economics, is telesurgery, where you might have a Boston-based hospital and satellite facilities in the suburbs. Rather than the clinician being with the patient in the operating room, you would have robots at the satellite hospitals, and the clinician could stay at the main hospital and connect remotely to perform procedures. That’s trending today, but it’s not automation.
What would an automated procedure look like?
Some simpler medical procedures are already automated using non-learning methods.
In joint replacement, for example, you need to create a cavity in the bone to place an implant. Historically, the skill of the clinician determined how well the implant fit and whether the joint alignment was appropriate.
But there’s a strong parallel with machining processes, which was the impetus for creating robots to mill cavities in the bone — leading to more accurate and consistent outcomes. That’s a big market today in orthopedics.
The autonomy of the milling robot is possible because it’s a well-defined problem and easy to model. You create a 3D model of the bones and a clinician can sit at a computer interface and use software to define exactly how the implant will be aligned and how much bone will be removed. So everything can be modeled and preplanned — the robot is basically just following the plan. It’s a dumb form of automation.
“Rather than the clinician being with the patient in the operating room, you would have robots at the satellite hospitals, and the clinician could stay at the main hospital and connect remotely to perform procedures.”
Pierre E. Dupont
That’s because of the nature of the bone and the implant. The dimensions are known. Nothing’s moving like it would if you were operating on a beating heart.
That’s right, although I think transcatheter cardiac procedures and endovascular procedures in general are actually great targets for automation.
The geometry is not as well-defined as orthopedic surgery, but it’s much simpler than in laparoscopy or any type of open surgery where you’re dealing with soft tissue.
In soft tissue surgery, you’re using forceps, scalpel, and suture to grasp, cut, and sew tissue. The clinician, through experience, has a model in their head of how hard they can squeeze the tissue without damaging it, how the tissue will deform when they pull on it and cut it, and how deeply they have to place the needle while suturing.
Those things are much harder to model with classical engineering techniques than milling bone.
How much of the progress in this area is due to the speed of technological development versus acceptance among clinicians and patients?
If you just think about robotics, the amount of acceptance is surprising. A lot of academic clinicians love to play with new toys. Many patients, perhaps incorrectly, assume that the clinician must do a better job with this incredible piece of equipment.
Hospitals want to know about costs. They don’t necessarily care if the clinician’s back is a little less sore at the end of the day because they used a robot. They want to know whether the patient had fewer complications and was discharged sooner — in other words, better care for less money. That’s the tough aspect of this: Robots cost more to make and roll out than most other medical equipment.
When you talk about the acceptance of medical robot automation, clinicians may be a little reluctant because they wonder whether they are going to lose their jobs. But it’s actually like giving them a highly effective tool that can raise their skill level.
There are a lot of clinicians who may only see a particular procedure 10 times a year. If you think of anything that’s complex in life that you only do once a month, you’re not going to do that as well and feel as confident as if you did it every day.
So, if the robot is not replacing them but acting like a highly experienced colleague, one you can communicate with and who can coach you through the procedure, explaining, “Now I’m going to do this,” or asking, “Do you think I should do it this way?” or “Should I put this device a little to the left?” then I think there’ll be acceptance. If you have a system that can bend a clinician’s learning curve down and raise their proficiency level very quickly, every clinician will want one.
How important are recent advances in large language models and other forms of AI in the discussion of autonomy?
These advances are what is going to enable progress in medical robot autonomy. We’re working on transcatheter valve repair procedures that right now are done by hand. Clinicians need to do a lot of these procedures to get good at them — and to stay that way.
We have seen in my lab that adding robotic teleoperation makes them easier. But if we can add learning-based autonomous functionality, we could make it possible for these procedures to be safely offered in low-volume facilities.
That’s important because a significant concern is that you get the best care and the newest treatments in the big urban areas that have academic medical centers. But many people don’t live in those areas and even though they could travel to get treatment, they want to get treated locally.
So, if you can enable community hospitals to offer these services, even though they’re low-volume, that’s an opportunity for a much larger fraction of the population to take advantage of the best medical care.
When we look further out, do you have any doubt that medicine will become more autonomous?
I think there’s a lot of opportunity for increasing levels of autonomy, but it has to be done gradually. You want to make sure that you’re regulating it so that patients are always safe.
There will be unanticipated events, such as unusual anatomical variations, that the system hasn’t been trained for. You need to make sure that the system will catch these problems as they come up — it needs to recognize when it’s out of its depth.
Currently, that’s a research topic in learning systems — there is technology that still needs to be developed. But the revolution over the last few years in foundation models has shown us how much is possible.
Ultimately, will there be a case where there’s no clinician involved? We don’t have to worry about that question yet.
You mentioned that these systems are expensive. Will costs come down the more they’re used?
The challenge is that medical devices are designed and approved for specific procedures. If you want to create a new medical device, you need to look at how many procedures are performed per year, and what the reimbursements are for those procedures.
For any medical device — not a robot — the smallest realistic market size is $100 million in sales per year. And if you want to raise venture capital funding, the market has to be at least a billion dollars.
Since medical robots are so expensive to develop, that means you should have a multibillion-dollar market for a medical robot. Those markets do exist: Laparoscopy and orthopedics are current examples. Endovascular procedures, including heart valve repair and replacement, are another market that I am targeting.
An important factor for each of these three examples is that the robot is a platform. It can be used for a variety of procedures and so has a much larger addressable market than a robot that can only do one thing.
Optical frequency combs are specially designed lasers that act like rulers to accurately and rapidly measure specific frequencies of light. They can be used to detect and identify chemicals and pollutants with extremely high precision.
Frequency combs would be ideal for remote sensors or portable spectrometers because they can enable accurate, real-time monitoring of multiple chemicals without complex moving parts or external equipment.
But developing frequency combs with high enough bandwidth for these applications has been a challenge. Often, researchers must add bulky components that limit scalability and performance.
Now, a team of MIT researchers has demonstrated a compact, fully integrated device that uses a carefully crafted mirror to generate a stable frequency comb with very broad bandwidth. The mirror they developed, along with an on-chip measurement platform, offers the scalability and flexibility needed for mass-producible remote sensors and portable spectrometers. This development could enable more accurate environmental monitors that can identify multiple harmful chemicals from trace gases in the atmosphere.
“The broader the bandwidth a spectrometer has, the more powerful it is, but dispersion is in the way. Here we took the hardest problem that limits bandwidth and made it the centerpiece of our study, addressing every step to ensure robust frequency comb operation,” says Qing Hu, Distinguished Professor in Electrical Engineering and Computer Science at MIT, principal investigator in the Research Laboratory of Electronics, and senior author on an open-access paper describing the work.
He is joined on the paper by lead author Tianyi Zeng PhD ’23; as well as Yamac Dikmelik of General Dynamics Mission Systems; Feng Xie and Kevin Lascola of Thorlabs Quantum Electronics; and David Burghoff SM ’09, PhD ’14, an assistant professor at the University of Texas at Austin. The research appears today in Light: Science & Applications.
Broadband combs
An optical frequency comb produces a spectrum of equally spaced laser lines, which resemble the teeth of a comb.
Scientists can generate frequency combs using several types of lasers for different wavelengths. By using a laser that produces long wave infrared radiation, such as a quantum cascade laser, they can use frequency combs for high-resolution sensing and spectroscopy.
In dual-comb spectroscopy (DCS), the beam of one frequency comb travels straight through the system and strikes a detector at the other end. The beam of the second frequency comb passes through a chemical sample before striking the same detector. Using the results from both combs, scientists can faithfully replicate the chemical features of the sample at much lower frequencies, where signals can be easily analyzed.
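The downconversion step can be illustrated with a toy calculation. The repetition rates and tooth index below are hypothetical, not taken from the paper; they are chosen only to show how the spacing compresses from the optical domain to the radio-frequency domain.

```python
# Toy dual-comb model (illustrative numbers, not from the paper).
# Two combs with slightly different repetition rates hit a shared
# detector; tooth n of one comb beats against tooth n of the other
# at a far lower, easily digitized frequency.

f_rep1 = 10.0e9          # comb 1 repetition rate: 10 GHz (assumed)
f_rep2 = 10.0e9 + 1.0e6  # comb 2 detuned by 1 MHz (assumed)

def tooth(n, f_rep):
    # Frequency of the n-th comb line (carrier-offset frequency
    # omitted for simplicity).
    return n * f_rep

n = 3000
# Beat note between matching teeth: n * (f_rep2 - f_rep1)
beat = abs(tooth(n, f_rep2) - tooth(n, f_rep1))

# The whole optical spectrum is mapped down by this factor:
compression = f_rep1 / (f_rep2 - f_rep1)

print(beat, compression)
```

In this sketch a 30 THz optical tooth lands near 3 GHz at the detector, a 10,000-fold compression, which is why the resulting signals can be handled by ordinary electronics.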
The frequency combs must have high bandwidth, or they will only be able to detect a small frequency range of chemical compounds, which could lead to false alarms or inaccurate results.
Dispersion is the most important factor that limits a frequency comb’s bandwidth. If there is dispersion, the laser lines are not evenly spaced, which is incompatible with the formation of frequency combs.
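That statement can be made concrete with a toy model (all numbers hypothetical): adding a small quadratic dispersion term to the mode frequencies makes successive line spacings drift instead of staying fixed at the repetition rate.

```python
# Toy model of comb lines with dispersion (illustrative numbers only).
f0 = 30.0e12   # assumed center frequency, 30 THz
fr = 10.0e9    # nominal line spacing, 10 GHz
D = 50.0e3     # assumed quadratic dispersion term, in Hz per n^2

modes = [f0 + n * fr + D * n**2 for n in range(-3, 4)]
spacings = [b - a for a, b in zip(modes, modes[1:])]

# With D = 0 every spacing equals fr. With D != 0 the spacing between
# lines n and n+1 is fr + D*(2n + 1), so it grows by 2*D per line and
# the teeth are no longer equidistant, spoiling comb formation.
deviations_khz = [(s - fr) / 1e3 for s in spacings]
print(deviations_khz)
```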
“With long wave infrared radiation, the dispersion will be very high. There is no way to get around it, so we have to find a way to compensate for it or counteract it by engineering our system,” Hu says.
Many existing approaches aren’t flexible enough to be used in different scenarios or don’t enable high enough bandwidth.
Hu’s group previously solved this problem in a different type of frequency comb, one that used terahertz waves, by developing a double-chirped mirror (DCM).
A DCM is a special type of optical mirror that has multiple layers with thicknesses that change gradually from one end to the other. They found that this DCM, which has a corrugated structure, could effectively compensate for dispersion when used with a terahertz laser.
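The gradually varying layers can be sketched as a chirped quarter-wave stack. The refractive indices and the 8–10 micron design range below are assumptions for illustration (the article does not give them); the point is that each layer is a quarter of its local design wavelength thick, so different colors reflect at different depths.

```python
# Sketch of a chirped quarter-wave stack (hypothetical parameters).
# In a chirped mirror the local Bragg wavelength changes with depth,
# so different colors penetrate to different depths, which is how
# the mirror counteracts dispersion.

n_hi, n_lo = 3.4, 1.0  # assumed indices: semiconductor layer / air gap
# Design (Bragg) wavelength chirped from 8 um to 10 um (assumed):
design_wavelengths_um = [8.0 + 0.25 * k for k in range(9)]

layers_um = []
for lam in design_wavelengths_um:
    layers_um.append(lam / (4 * n_hi))  # quarter wave, high-index layer
    layers_um.append(lam / (4 * n_lo))  # quarter wave, air gap

# Adjacent high-index layers in this toy stack differ by only ~18 nm,
# echoing the fabrication-precision challenge described later on.
delta_nm = (layers_um[2] - layers_um[0]) * 1000
print(delta_nm)
```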
“We tried to borrow this trick and apply it to an infrared comb, but we ran into lots of challenges,” Hu says.
Because infrared waves are 10 times shorter than terahertz waves, fabricating the new mirror required an extreme level of precision. At the same time, they needed to coat the entire DCM in a thick layer of gold to remove the heat under laser operation. Plus, their dispersion measurement system, designed for terahertz waves, wouldn’t work with infrared waves, which have frequencies that are about 10 times higher than terahertz.
“After more than two years of trying to implement this scheme, we reached a dead end,” Hu says.
A new solution
Ready to throw in the towel, the team realized something they had missed. They had designed the mirror with corrugation to compensate for the lossy terahertz laser, but infrared radiation sources aren’t as lossy.
This meant they could use a standard DCM design to compensate for dispersion, which is compatible with infrared radiation. However, they still needed to create curved mirror layers to capture the beam of the laser, which made fabrication much more difficult than usual.
“The adjacent layers of mirror differ only by tens of nanometers. That level of precision precludes standard photolithography techniques. On top of that, we still had to etch very deeply into the notoriously stubborn material stacks. Achieving those critical dimensions and etch depths was key to unlocking broadband comb performance,” says Zeng.
In addition to precisely fabricating the DCM, they integrated the mirror directly onto the laser, making the device extremely compact. The team also developed a high-resolution, on-chip dispersion measurement platform that doesn’t require bulky external equipment.
“Our approach is flexible. As long as we can use our platform to measure the dispersion, we can design and fabricate a DCM that compensates for it,” Hu adds.
Taken together, the DCM and on-chip measurement platform enabled the team to generate stable infrared laser frequency combs that had far greater bandwidth than can usually be achieved without a DCM.
In the future, the researchers want to extend their approach to other laser platforms that could generate combs with even greater bandwidth and higher power for more demanding applications.
“These researchers developed an ingenious nanophotonic dispersion compensation scheme based on an integrated air–dielectric double-chirped mirror. This approach provides unprecedented control over dispersion, enabling broadband comb formation at room temperature in the long-wave infrared. Their work opens the door to practical, chip-scale frequency combs for applications ranging from chemical sensing to free-space communications,” says Jacob B. Khurgin, a professor at the Johns Hopkins University Whiting School of Engineering, who was not involved with this paper.
This work is funded, in part, by the U.S. Defense Advanced Research Projects Agency (DARPA) and the Gordon and Betty Moore Foundation. This work was carried out, in part, using facilities at MIT.nano.
The comb uses a double-chirped mirror (DCM), pictured, which is a special type of optical mirror that has multiple layers with thicknesses that change gradually from one end to the other.
A research team, led by the Universities of Bristol and Cambridge, demonstrated that the polymer material used to make the artificial heart valve is safe following a six-month test in sheep.
Currently, the 1.5 million patients who need heart valve replacements each year face trade-offs. Mechanical heart valves are durable but require lifelong blood thinners due to a high risk of blood clots, whereas biological valves, made from animal tissue, typically last between eight to 10 years before needing replacement.
The artificial heart valve developed by the researchers is made from SEBS (styrene-block-ethylene/butylene-block-styrene) – a type of plastic that has excellent durability but does not require blood thinners – and potentially offers the best of both worlds. However, further testing is required before it can be tested in humans.
In their study, published in the European Journal of Cardio-Thoracic Surgery, the researchers tested a prototype SEBS heart valve in a preclinical sheep model that mimicked how these valves might perform in humans.
The animals were monitored over six months to examine potential long-term safety issues associated with the plastic material. At the end of the study, the researchers found no evidence of harmful calcification (mineral buildup) or material deterioration, blood clotting or signs of cell toxicity. Animal health, wellbeing, blood tests and weight were all stable and normal, and the prototype valve functioned well throughout the testing period, with no need for blood thinners.
“More than 35 million patients’ heart valves are permanently damaged by rheumatic fever, and with an ageing population, this figure is predicted to increase four to five times by 2050,” said Professor Raimondo Ascione from the University of Bristol, the study’s clinical lead. “Our findings could mark the beginning of a new era for artificial heart valves: one that may offer safer, more durable and more patient-friendly options for patients of all ages, with fewer compromises.”
“We are pleased that the new plastic material has been shown to be safe after six months of testing in vivo,” said Professor Geoff Moggridge from Cambridge’s Department of Chemical Engineering and Biotechnology, biomaterial lead on the project. “Confirming the safety of the material has been an essential and reassuring step for us, and a green light to progress the new heart valve replacement toward bedside testing.”
The results suggest that artificial heart valves made from SEBS are durable and do not require the lifelong use of blood thinners.
While the research is still early-stage, the findings help clear a path to future human testing. The next step will be to develop a clinical-grade version of the SEBS polymer heart valve and test it in a larger preclinical trial before seeking approval for a pilot human clinical trial.
The study was funded by a British Heart Foundation (BHF) grant and supported by a National Institute for Health and Care Research (NIHR) Invention for Innovation (i4i) programme Product Development Awards (PDA) award. Geoff Moggridge is a Fellow of King's College, Cambridge.
Adapted from a University of Bristol media release.
An artificial heart valve made from a new type of plastic could be a step closer to use in humans, following a successful long-term safety test in animals.
Created by King's E-Lab, in partnership with Founders at the University of Cambridge, SPARK will act as an entrepreneurial launchpad. The programme will offer hands-on support, world-class mentorship and practical training to enable world-changing ventures tackling challenges such as disease prevention and treatment, fertility support and climate resilience. The combined networks of successful entrepreneurs, investor alumni and venture-building expertise brought by King’s E-Lab and Founders at the University of Cambridge will address a critical gap to drive innovation.
More than 180 applications were received for SPARK 1.0, reflecting strong demand for early incubation support. The selected companies focus on AI, machine learning, biotechnology and impact: 42% are at the idea stage, 40% have an early-stage product, and 17% have early users. Around half of the selected companies are led by women.
Ashgold Africa - An edtech business building solar projects to provide sustainable energy in rural Kenya.
Aizen Software - Credit referencing fintech working on financial inclusion.
Atera Analytics - Optimising resources around the EV energy infrastructure ecosystem.
Cambridge Mobilytics - Harnessing data from UK EV charging stations to aid decision-making in the e-mobility sector.
Dielectrix - Building next-gen semiconductor dielectric materials for electronics using 2D materials.
Dulce Cerebrum - Building AI models to detect psychosis from blood tests.
GreenHarvest - Data-driven agritech firm using satellite and climate data to predict changing crop yield migration.
Heartly - Offering affordable, personalised guidance on preventing cardiovascular disease.
Human Experience Dynamics - Combining patient experiences and physiological measures to create holistic insight in psychiatric trials.
iFlame - Agentic AI system to help build creative product action plans.
IntolerSense - Uncovering undiscovered food intolerances using an AI-powered app.
Med Arcade - AI-powered co-pilot to help GPs interact with patient data.
MENRVA - AI-powered matchmaking engine for the art world, connecting galleries, buyers and art businesses.
Myta Bio - Leveraging biomimetic science to create superior industrial chemicals from natural ingredients.
Egg Advisor - Digital platform offering expert advice to women seeking to freeze their eggs.
Polytecks - Wearable tech firm building e-textiles capable of detecting valvular heart diseases.
RetroAnalytica - Using AI to decarbonise buildings by predicting energy inefficiencies.
SafeTide - Using ‘supramolecular’ technology to keep delicate medicines stable at room temperature for longer periods.
The Surpluss - Climate tech company identifying unused resources in businesses and redistributing them.
Yacson Therapeutics - Using ML to find plant-based therapeutics to help combat inflammatory bowel disease.
Zenithon AI - Using AI and ML to help advance the development of nuclear fusion energy.
The intensive incubator will run for four weeks from the end of August. Each participant will receive specialised support from Founders at the University of Cambridge and King’s E-Lab mentors and entrepreneurs-in-residence to turn their concepts into companies that can attract investment and ultimately grow into startups capable of driving economic growth.
Following the programme, the founders will emerge with:
A validated business model and a clear pathway to product development
Access to expert mentorship and masterclasses with global entrepreneurs and investors
The opportunity to pitch for £20,000 of investment, plus the chance to secure further investment from established angel investors at Demo Day
A chance to join a thriving community of innovators and change-makers
Kamiar Mohaddes, co-founder and Director of King’s Entrepreneurship Lab, said: “Cambridge has been responsible for many world-changing discoveries, but entrepreneurship isn't the first thought of most people studying here. Driving economic growth requires inspiring the next generation to think boldly about how their ideas can shape industries and society. We want SPARK to be a catalyst, showing students the reality of founding a company. We look forward to seeing this cohort turn their ambitions into ventures that contribute meaningfully to the economy.”
Gerard Grech, Managing Director at Founders at the University of Cambridge, said: “Cambridge is aiming to double its tech and science output in the next decade – matching what it achieved in the past 20 years. That ambition starts at the grassroots. The energy from the students, postgraduates and alumni is clear, and with tech contributing £159 billion to the UK economy and 3 million jobs, building transformative businesses is one of the most powerful ways to make an impact. This SPARK 1.0 cohort is beginning that journey, and we’re pleased to partner with King’s Entrepreneurship Lab to support them.”
Gillian Tett, Provost of King’s College, said: “Cambridge colleges have more talent in AI, life sciences and technology, including quantum computing, than ever. Through SPARK, we can support even more students, researchers and alumni to turn their ambition into an investable idea and make the leap from the lab to the marketplace. This isn’t just a game-changer for King’s, but for every college in Cambridge whose students join this programme and journey with us to make an impact from Cambridge, on the world.”
Jim Glasheen, Chief Executive of Cambridge Enterprise, said: “The SPARK 1.0 cohort highlights the breadth and depth of innovation within collegiate Cambridge. SPARK, and the partnership between King’s College and Founders at the University of Cambridge, is a testament to our shared commitment to nurture and empower Cambridge innovators who will tackle global challenges and contribute to economic growth.”
The programme is free for students graduating in Summer 2025, postgraduates, post-docs, researchers, and alumni who have graduated within the last two years. This is made possible through the University of Cambridge, as well as a generous personal donation from Malcolm McKenzie, King’s alumnus and Chair of the E-Lab’s Senior Advisory Board.
King’s Entrepreneurship Lab (King’s E-Lab) and Founders at the University of Cambridge have revealed the 24 startups that will join King’s College’s first-ever incubator programme designed to turn research-backed ideas from University of Cambridge students and alumni into investable companies.
The scanner, funded through a £5.5m investment from the UKRI Medical Research Council (MRC), will form part of the National PET Imaging Platform (NPIP), the UK’s first-of-its-kind national total-body PET imaging platform for drug discovery and clinical research.
Positron emission tomography (PET) is a powerful technology for imaging living tissues and organs down to the molecular level in humans. It can be used to investigate how diseases arise and progress and to detect and diagnose diseases at an early stage.
Total-body PET scanners are more sensitive than existing technology and so can provide new insights into anatomy that have never been seen before, improving detection, diagnosis and treatment of complex, multi-organ diseases.
Current PET technology is less sensitive and requires the patient to be repositioned multiple times to achieve a full-body field of view. Total-body PET scans can achieve this in one session and are quicker, exposing patients to considerably lower doses of radiation. This means more patients, including children, can participate in clinical research and trials to improve our understanding of diseases.
ANGLIA network of universities and NHS trusts
Supplied by Siemens Healthineers, the scanner will also be the focus of the ANGLIA network, comprising three universities, each paired with one or more local NHS trusts: the University of Cambridge and Cambridge University Hospitals NHS Foundation Trust; UCL and University College London Hospitals NHS Foundation Trust; and the University of Sheffield with Sheffield Teaching Hospitals NHS Foundation Trust.
The network, supported by UKRI, is partnered with biotech company Altos Labs and pharmaceutical company AstraZeneca, both with R&D headquarters in Cambridge, and Alliance Medical, a leading provider of diagnostic imaging.
Franklin Aigbirhio, Professor of Molecular Imaging Chemistry at the University of Cambridge, will lead the ANGLIA network. He said: “This is an exciting new technology that will transform our ability to answer important questions about how diseases arise and to search for and develop new treatments that will ultimately benefit not just our patients, but those across the UK and beyond.
“But this is more than just a research tool. It will also help us diagnose and treat diseases at an even earlier stage, particularly in children, for whom repeated investigations using standard PET scanners were not an option.”
The scanner will be located in Addenbrooke’s Hospital, Cambridge, supported by the National Institute for Health and Care Research (NIHR) Cambridge Biomedical Research Centre, ensuring that the discoveries and breakthroughs it enables can be turned rapidly into benefits to patients. It will expand NHS access to PET services, particularly in underserved areas across the East of England, and support more inclusive trial participation.
Patrick Maxwell, Regius Professor of Physic and Head of the School of Clinical Medicine at the University of Cambridge, said: “The ANGLIA network, centred on the Cambridge Biomedical Campus and with collaborations across the wider University and its partners, will drive innovations in many areas of this key imaging technology, such as new radiopharmaceuticals and application of AI to data analysis, that will bring benefits to patients far beyond its immediate reach. Its expertise will help build the next generation of PET scientists, as well as enabling partners in industry to use PET to speed up the development of new drugs.”
Roland Sinker, Chief Executive of Cambridge University Hospitals NHS Foundation Trust, which runs Addenbrooke’s Hospital, said: “I am pleased that our patients will be some of the first to benefit from this groundbreaking technology. Harnessing the latest technologies and enabling more people to benefit from the latest research is a vital part of our work at CUH and is crucial to the future of the NHS.
“By locating this scanner at Addenbrooke’s we are ensuring that it can be uniquely used to deliver wide ranging scientific advances across academia and industry, as well as improving the lives of patients.”
It is anticipated that the scanner will be installed by autumn 2026.
The co-location of the total-body PET scanner with existing facilities and integration with systems at the University of Cambridge and Addenbrooke’s Hospital will also enhance training and research capacity, particularly for early-career researchers and underrepresented groups.
The ANGLIA network will provide opportunities to support and train more people from Black and other minority ethnic backgrounds in PET chemistry and imaging. The University of Cambridge will support a dedicated fellowship scheme, provide capacity and capability training in key areas, and strengthen the network partnership through joint projects and exchange visits.
Professor Aigbirhio, who is also co-chair of the UKRI MRC’s Black in Biomedical Research Advisory Group, added: “Traditionally, scientists from Black and other minority ethnic backgrounds are under-represented in the field of medical imaging. We aim to use our network to change this, providing fellowship opportunities and training targeted at members of these communities.”
The National PET Imaging Platform
Funded by UKRI’s Infrastructure Fund, and delivered by a partnership between Medicines Discovery Catapult, MRC and Innovate UK, NPIP provides a critical clinical infrastructure of scanners, creating a nationwide network for data sharing, discovery and innovation. It allows clinicians, industry and researchers to collaborate on an international scale to accelerate patient diagnosis, treatment and clinical trials. The MRC funding for the Cambridge scanner will support the existing UKRI Infrastructure Fund investment for NPIP and enables the University to establish a total-body PET facility.
Dr Ceri Williams, Executive Director of Challenge-Led Themes at MRC, said: “MRC is delighted to augment the funding for NPIP to provide an additional scanner for Cambridge in line with the original recommendations of the funding panel. This additional machine will broaden the geographic reach of the platform, providing better access for patients from East Anglia and the Midlands, and enable research to drive innovation in imaging, detection, and diagnosis, alongside supporting partnership with industry to drive improvements and efficiency for the NHS.”
Dr Juliana Maynard, Director of Operations and Engagement for the National PET Imaging Platform, said: “We are delighted to welcome the University of Cambridge as the latest partner of NPIP, expanding our game-changing national imaging infrastructure to benefit even more researchers, clinicians, industry partners and, importantly, patients.
“Once operational, the scanner will contribute to NPIP’s connected network of data, which will improve diagnosis and aid researchers’ understanding of diseases, unlocking more opportunities for drug discovery and development. By fostering collaboration on this scale, NPIP helps accelerate disease diagnosis, treatment, and clinical trials, ultimately leading to improved outcomes for patients."
A new total-body PET scanner to be hosted in Cambridge – one of only a handful in the country – will transform our ability to diagnose and treat a range of conditions in patients and to carry out cutting-edge research and drug development.
Demonstrating its strong commitment to decarbonising the built environment through a portfolio of 64 Green Mark-certified buildings, NUS was awarded the Green Mark Commemorative Certificate at the Singapore Green Building Council Gala Dinner on 11 July 2025 for its continued support and contributions to Singapore’s green building journey. The event was held in celebration of the 20th anniversary of the Building and Construction Authority (BCA) Green Mark Certification Scheme.
Leading the way in high-performance green buildings
As of July 2025, six buildings in NUS have achieved the highest energy performance ratings under the Green Mark 2021 In-Operation scheme: Platinum Super Low Energy (SLE), Zero Energy (ZE), and Positive Energy (PE).
In 2021, NUS bagged the top BCA award for green buildings – the Green Mark Platinum Champion Award – for achieving 50 Green Mark Gold and above certifications for developments across its campuses.
One example is Ventus at NUS’ Kent Ridge Campus, where the Office of University Campus Infrastructure (UCI) is located. Through green building design and collaboration with building users to implement energy-saving measures, the Energy Use Intensity (EUI), which measures the total energy consumption of a building relative to its gross floor area, for Ventus was 49 kWh/m2 in FY2024, which is significantly lower than the average EUI for offices at 219 kWh/m2.
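The EUI comparison above follows a simple ratio. As a sketch of the arithmetic (the floor-area figure below is illustrative and not taken from the release):

```latex
% Energy Use Intensity as defined in the text:
% total energy consumption relative to gross floor area
\mathrm{EUI} \;=\; \frac{\text{annual energy consumption (kWh)}}{\text{gross floor area (m}^2\text{)}}
% Illustrative example: an office of 10{,}000~\text{m}^2 consuming
% 490{,}000~\text{kWh} in a year would have
% \mathrm{EUI} = 490{,}000 / 10{,}000 = 49~\text{kWh/m}^2,
% matching Ventus's FY2024 figure and well below the 219~\text{kWh/m}^2 office average.
```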
Beyond achieving high energy performance for the University’s new and existing buildings, green buildings also offer health and well-being benefits to building occupants. Comprehensive research by BCA and NUS in 2017 revealed that occupants experienced significantly better indoor environmental conditions than those in non-certified buildings, with measurable improvements in temperature control, humidity levels, air quality, and overall occupant satisfaction.
Translating innovation into practice
Beyond certification, NUS has played a pivotal role in pushing for an alternative cooling approach to reduce energy consumption while maintaining thermal comfort. Under Singapore’s new proposed Built Environment Decarbonisation Technology Roadmap, Associate Professor Adrian Chong from the Department of the Built Environment at the College of Design and Engineering, together with the UCI sustainability team, contributed to the development of a new technical standard known as TR 141:2025 for hybrid cooling systems, which aims to accelerate the deployment of hybrid cooling systems across building typologies. Their inputs were informed by the design and operational experience gained from implementing hybrid cooling technology in Net Zero Energy buildings at NUS. The new technical standard addresses the current gap in international standards and provides guidelines suited to our tropical climate.
Under the Campus Sustainability Roadmap 2030 Framework, the University aims to reduce Scope 1 and Scope 2 emissions by 30 per cent and reduce EUI by 20 per cent from the FY2019 baseline. By setting ambitious energy efficiency targets for its new and existing buildings, NUS aims to use the campus as a living lab where students can engage with real-world case studies for experiential learning and researchers can test-bed and develop innovations to improve building performance.
Supporting Singapore’s green building journey and Net Zero vision
NUS is committed to reducing both operational and embodied carbon across its building portfolio, in alignment with the national green building agenda and broader climate goals. As part of its strategy to address Scope 3 emissions, NUS has commissioned a study on sustainable construction, specifically on low embodied carbon materials, and is currently exploring the procurement of such materials for future NUS developments.
Vice President (Campus Infrastructure) Mr Koh Yan Leng said, “Greening the university’s buildings, such as by providing ample greenery, natural daylight, and more natural ventilation, has created a conducive environment for learning and working. Our commitment to bold energy targets helps shape the behaviours of future leaders and enables our community to engage with sustainable practices every day.”
Days before Singapore celebrated its 60th year of independence, a new book was added to the canon of literature on the city-state’s improbable growth and survival – this time, through the eyes of public sector leaders who had a hand in its transformation.
Launched on 6 August 2025, How Singapore Beat the Odds: Insider Insights on Governance in the City-State by Terence Ho, Adjunct Associate Professor (Practice) at the Lee Kuan Yew School of Public Policy (LKYSPP), features in-depth interviews with 12 political and public sector leaders on the challenges faced, chances taken and decisions made that shaped Singapore into what it is today, as well as their perspectives on the journey and views on Singapore’s future.
Assoc Prof Ho is a frequent commentator on economic, fiscal, workforce and social policy issues, as well as skills and lifelong learning. As an adjunct faculty member of LKYSPP, he teaches in executive programmes covering various areas of public sector governance.
The foreword was written by Mr Tharman Shanmugaratnam, President of the Republic of Singapore, and the distinguished interviewees include former Singapore President Mdm Halimah Yacob; Ms Seah Jiak Choo, former Director-General of Education; Mr Peter Ho, former Chairman of the Urban Redevelopment Authority (URA); and Mr Ravi Menon, former Managing Director of the Monetary Authority of Singapore.
The project was mooted and sponsored by Mr Narayana Murthy, founder and Chairman Emeritus of technology provider Infosys and a long-time admirer of Singapore. He believed that such a book of insights into Singapore’s public governance would inspire and guide leaders of developing nations to drive similar transformations of their own.
“The success of Singapore lies not in any magic, but in a successful method,” said Mr Murthy at the launch event. “Singapore demonstrates that good public governance is a possible, practical, valuable and powerful strategy. To Singapore and to the visionaries who made it what it is today, I say ‘Thank you’ for lifting the standard, showing the way and reminding the world that public governance done right can transform the destiny of a nation.”
Noting the unplanned coincidence of the book’s launch with the nation’s diamond jubilee, Assoc Prof Ho highlighted that the themes explored in the book were not just critical in Singapore’s early development but remain relevant for its next 60 years. “While the book aims to inspire other countries, I hope that Singaporeans too will draw inspiration from the values and the pioneering spirit of our veteran leaders as we collectively continue to write the next chapter of the Singapore story.”
The book launch at the Capitol Kempinski Hotel Singapore was attended by President Tharman as Guest of Honour and more than 120 guests, including several of the book’s interviewees. During the event, three leaders featured in the book delivered speeches on the aspects of governance where they made significant contributions: Professor Lim Siong Guan, former Head of Civil Service, on fiscal management; Mr Khaw Boon Wan, former Singapore Cabinet Minister, on healthcare; and Professor Cheong Koon Hean, former CEO of URA and the Housing and Development Board, on urban planning.
Priorities to not just survive, but thrive
The idea that no one owes Singapore its survival was a driving force for many early governance decisions, said Prof Lim, who is also Distinguished Practitioner Fellow at LKYSPP and was recently honoured as an Emeritus Professor by NUS for his noteworthy contributions to Singapore’s civil service.
In his speech, Prof Lim shared the government’s top spending priorities for nation-building: defence, industrialisation, home ownership, education and health. While principles of fiscal prudence were established from the start, the government recognised the need to enhance its revenue streams and created the sovereign wealth fund GIC and the reserves policy to generate investment returns and allocate them between current and future needs.
Prof Lim described Singapore’s approach thus: “If you want to spend more, you need to be able to earn more at the same time. Be fair; give access to the current generation to a reasonable or fair part of the investment returns, while at the same time making sure that you keep up the capital reserves for whatever may be the unexpected demands of the rainy days.”
Solving the issue of sustainable financing for the healthcare system was Mr Khaw’s task as Health Minister from 2004 to 2011. Balancing the two dimensions of healthcare, financing and delivery, is especially challenging because “in healthcare, demand is potentially unlimited, but supply is not,” and poor patients suffer the most when demand outstrips supply, said the current chairman of SPH Media Trust.
The solution lay in a mixed approach for both dimensions. Instead of relying on taxation or insurance for funding, Singapore employs a multi-layered model that includes patient co-payment to prevent overconsumption while keeping healthcare affordable. Similarly, healthcare services are delivered by a mix of public, private and charitable providers, which offers patients more choices and responds better to the diverse needs of an ageing population.
As Singapore evolves into a “super aged” society, the healthcare system must focus more on long-term chronic care, building on the pillars of preventative healthcare, primary healthcare by family physicians and a healthcare system that is integrated with the community to provide holistic care, said Mr Khaw.
He spotlighted the crucial role of caregivers, saying: “When a cure is no longer available, what patients value most is care, and the highest form of care is underpinned by love and compassion, often by the patient’s family. Society has a strong interest, indeed a strong duty, to support caregivers adequately.”
A combination of aspirations, innovation, and good urban governance enabled Singapore to turn the constraints that threatened its survival into catalysts for success, said Prof Cheong, Practice Professor at Singapore University of Technology and Design and Chairman of the Lee Kuan Yew Centre for Innovative Cities.
For example, water insecurity drove investments in water recycling and water catchments, and the scarcity of land forced planners to reclaim land, create underground spaces and plan vertically.
“The starting point is to embed a culture of foresight in the way our policies are formulated. The city did not wait for problems to become crises,” she said, adding that a whole-of-government approach and collaboration with the private sector, non-profits and citizens were key in creating solutions that balance economic competitiveness and quality of life.
Considering principles in context
The event concluded with a brief panel discussion moderated by Ms Ong Toon Hui, Vice Dean and Executive Director, Institute for Governance and Leadership at LKYSPP, in which the three speakers shared their thoughts on how the book’s insights into Singapore’s experience could be made relevant to other countries.
Prof Lim and Mr Khaw emphasised the importance of understanding the principles behind the decisions and translating those into actions that fit the context of each country, and Prof Cheong provided additional insights from her experience researching cities for 20 years and working on the Lee Kuan Yew World City Prize.
She cited long-term planning, good urban governance, strong leadership, institutionalised processes, and competent people as common traits of the award-winning cities, some of which have been integrated into the prize assessment criteria.
“The cities that have won the prize all exhibit very similar principles to us – maybe not so much in the details – but these are very high-level principles that I feel are applicable to most cities,” she said, reminding the audience that even as others look to Singapore for lessons in governance, there is much that Singapore can learn from other cities in return.
Funding cut disrupts Botswana-based effort to help patients control illness without regular treatments
Liz Mineo
Harvard Staff Writer
For more than 20 years, Harvard infectious disease specialist Roger Shapiro has fought HIV on the ground in Botswana, where the rate of infection exceeded 30 percent in some areas of the country in the 1990s.
Progress has been steady since then. According to the World Bank, Botswana still has one of the world’s highest rates of infection — over 20 percent of the adult population — but far fewer HIV deaths. The main lifesaver has been antiretroviral treatment (ART).
Shapiro began working in Botswana in 1999 under the mentorship of pioneering AIDS researcher Max Essex, who helped launch the Botswana Harvard Health Partnership (BHP). He has run dozens of studies on HIV/AIDS in Botswana and has become an expert in how HIV affects maternal and child health.
In 2008, pioneering AIDS researcher Professor Max Essex spoke to a group gathered at his lab in Gaborone, Botswana.
On the grounds of Princess Marina Hospital in Gaborone, a plaque recognizes the partnership between Harvard School of Public Health and the Botswana Ministry of Health.
Among Shapiro’s current studies is a trial with the potential to help some children control HIV without the need for regular treatment. Efforts to create a vaccine have so far failed, but there are exciting new developments with products known as broadly neutralizing antibodies, or bNAbs, he says.
The trial aims to find a new treatment option by examining the effects of a combination of three broadly neutralizing HIV antibodies. It builds upon previous studies suggesting that bNAbs might help the immune system clear the virus better than standard ART, and may offer a promising avenue for getting to post-treatment viral control, Shapiro says.
“It is the only study in pediatrics looking at three antibodies as combination treatment for HIV and ultimately as a path toward HIV cure,” he said. “It’s really exciting science, since we are testing whether some children can go off all treatment and control HIV on their own.”
In May, the five-year grant supporting the study was slashed as part of the Trump administration’s mass cancellation of Harvard research funds. Four other grants for Botswana-based projects led by Shapiro were also canceled. The cuts have not only dealt a serious blow to the participants in the trial and their families, said Shapiro, but imperiled progress toward a cure for pediatric HIV.
“This was one of the largest funded studies to begin making inroads in this field,” he said. “Now all this science is up in the air.”
Funded by the National Institutes of Health and the National Institute of Allergy and Infectious Diseases, the trial is following 12 children, aged 2 to 9, who are living with HIV. The study is in its second year, and researchers have been gearing up to have the children pause standard ART and start using antibodies alone as treatment.
The team had planned to scale up to 41 children, but due to the cuts, they are now aiming for 30. They were able to secure donations to continue with the project until March, but it’s unclear what will happen after that.
According to the Centers for Disease Control, Botswana is a leader in global HIV efforts, having exceeded the UNAIDS 95-95-95 targets: “95 percent of people living with HIV in Botswana know their status, 98 percent of people who know their status receive treatment, and 98 percent of people on treatment are virally suppressed.”
“Botswana probably has the best program to prevent HIV transmission to children on the continent,” said Shapiro. “Now less than half a percent of the children become infected because most women access free drug treatment during pregnancy, which effectively turns off transmission. It’s a tiny percentage, but it still leads to more pediatric HIV infections than we see in the United States.”
Giving treatment to children infected with HIV every day for the rest of their lives is a daunting prospect for many families, said Shapiro. Parents were excited about the possibility of their children being liberated from regular infusions of antibodies.
The grant’s termination is yet another blow to Botswana’s fight against HIV/AIDS. In February, assistance through three U.S. programs — USAID, the U.S. President’s Emergency Plan for AIDS Relief (PEPFAR), and the Centers for Disease Control and Prevention — was cut. Botswana’s government pays for medication, but it relied on those funds to provide services around HIV, said Shapiro.
“HIV/AIDS is essentially a chronic problem in Botswana, and a chronic problem needs ongoing treatment,” he said. “If treatment lapses … we worry about HIV transmission going back up again, not only in Botswana but throughout all of Africa.”
Why was Pacific Northwest home to so many serial killers?
In ‘Murderland,’ alum explores lead-crime theory through lens of her own memories growing up there
Jacob Sweet
Harvard Staff Writer
In Caroline Fraser’s 2025 book “Murderland,” the air is always thick with smog, and sinister beings lie around every corner.
Fraser, Ph.D. ’87, in her first book since “Prairie Fires,” her Pulitzer Prize-winning biography of “Little House on the Prairie” author Laura Ingalls Wilder, explores the proliferation of serial killers in the 1970s — weaving together ecological and social history, memoir, and disturbing scenes of predation and violence. The resulting narrative shifts the conventional focus on the psychology of serial killers to the environment around them. As the Pacific Northwest reels from a slew of serial murderers, Fraser turns toward the nearby smelters that shoot plumes of lead, arsenic, and cadmium into the air and the companies, government officials, and even citizens who are happy to overlook the pollution.
Of the Pacific Northwest’s most notorious killers, Fraser ties many to these smokestacks. Ted Bundy, whose crimes and background are discussed more than any other character, grew up in the shadows of the ASARCO copper smelter in Tacoma, Washington. Gary Ridgway grew up in Tacoma, too, and Charles Manson spent 10 years at a nearby prison, where lead has seeped into the soil. Richard Ramirez, known as the Night Stalker, grew up next to a different ASARCO smokestack in El Paso, Texas, long before committing murders in Los Angeles.
Fraser’s own experiences growing up in Mercer Island, Washington, add another eerie dimension. A classmate’s father blows up his home with the family inside. Another classmate becomes a serial killer. Her Christian Scientist father is menacing and abusive, and Fraser, as a child, considers ways to get rid of him, possibly by pushing him off a boat. The darkness is unrelenting; something is in the air.
To what extent environmental degradation directly led to the killings described in the book, Fraser leaves up to readers. “There are many things that probably contribute to somebody who commits these kinds of crimes,” she said in an interview. “I did not conceive of it as a work of criminology or an academic treatise on the lead-crime hypothesis. I really just wanted to tell a history about the history of the area — what I remember of it — and create a narrative that took all these things into account.”
Fraser has been thinking about these ideas for decades. Before “Prairie Fires” was published, she had already written some of the memoir portions of the book, recalling the crimes and unusual occurrences near her family’s home. She was long interested in why there were so many serial killers in the Pacific Northwest and whether the answer was simply happenstance.
Though she had some knowledge of the pollution in Tacoma as a kid — the area’s smell was referred to as the “Aroma of Tacoma” due to sulfur emissions from a local factory — it wasn’t until decades later that she learned the full scope of industrial production and pollution.
Some revelations came by chance. When looking at one property on Vashon Island, across the Puget Sound from West Seattle, she came across a listing with the ominous warning — “arsenic remediation needed.”
“That just leapt out at me,” she said. “How can there be arsenic on Vashon Island?” After more research, she discovered that arsenic had come from the ASARCO smelter, on the south end of the same body of water. The damage reached much farther; the Washington State Department of Ecology says that air pollutants — mostly arsenic and lead — from the smelter settled on the surface soil of more than 1,000 square miles of the Puget Sound Basin.
“Much of Tacoma, with a population approaching 150,000, will record high lead levels in neighborhood soils,” Fraser wrote in the book, “but the Bundy family lives near a string of astonishingly high measurements of 280, 340, and 620 parts per million.”
The connection made Fraser focus more on the physical environment in which these serial killers lived and less on other factors — like a history of abuse — on which true-crime writers have historically placed greater emphasis.
In this ecological pursuit, Fraser points readers toward once-ubiquitous sources of pollution like leaded gas and the industry forces that popularized them against advice from public-health experts.
American physicians raise concerns that lead particulates will blanket the nation’s roads and highways, poisoning neighborhoods slowly and “insidiously.” They call it “the greatest single question in the field of public health that has ever faced the American public.” Their concerns are swept aside, however, and Frank Howard, a vice president of the Ethyl Corporation, a joint venture between General Motors and Standard Oil, calls leaded gasoline a “gift of God.”
Though Fraser doesn’t explicitly support the lead-crime hypothesis, the core of the idea — that greater exposure to lead results in higher rates of crime — remains central. In the book’s final chapter, Fraser cites the work of economist Jessica Wolpaw Reyes, Ph.D. ’02, who concluded in her dissertation that lead exposure correlates with higher adult crime rates.
Regardless of exactly how much this hypothesis can be assuredly proven, Fraser thinks the connections between unapologetic and unfettered pollution and violent crime warrant scrutiny. In “Murderland,” she gives the idea, and an era of crime, a nimble, haunting narrative.
Princeton grad alum Cara Brook is also among the 22 early-career researchers named as new Pew Scholars, funded for four years "to uncover fundamental insights about human health and disease."
Using machine learning, MIT chemical engineers have created a computational model that can predict how well any given molecule will dissolve in an organic solvent — a key step in the synthesis of nearly any pharmaceutical. This type of prediction could make it much easier to develop new ways to produce drugs and other useful molecules.
The new model, which predicts how much of a solute will dissolve in a particular solvent, should help chemists to choose the right solvent for any given reaction in their synthesis, the researchers say. Common organic solvents include ethanol and acetone, and there are hundreds of others that can also be used in chemical reactions.
“Predicting solubility really is a rate-limiting step in synthetic planning and manufacturing of chemicals, especially drugs, so there’s been a longstanding interest in being able to make better predictions of solubility,” says Lucas Attia, an MIT graduate student and one of the lead authors of the new study.
The researchers have made their model freely available, and many companies and labs have already started using it. The model could be particularly useful for identifying solvents that are less hazardous than some of the most commonly used industrial solvents, the researchers say.
“There are some solvents which are known to dissolve most things. They’re really useful, but they’re damaging to the environment, and they’re damaging to people, so many companies require that you have to minimize the amount of those solvents that you use,” says Jackson Burns, an MIT graduate student who is also a lead author of the paper. “Our model is extremely useful in being able to identify the next-best solvent, which is hopefully much less damaging to the environment.”
William Green, the Hoyt Hottel Professor of Chemical Engineering and director of the MIT Energy Initiative, is the senior author of the study, which appears today in Nature Communications. Patrick Doyle, the Robert T. Haslam Professor of Chemical Engineering, is also an author of the paper.
Solving solubility
The new model grew out of a project that Attia and Burns worked on together in an MIT course on applying machine learning to chemical engineering problems. Traditionally, chemists have predicted solubility with a tool known as the Abraham Solvation Model, which can be used to estimate a molecule’s overall solubility by adding up the contributions of chemical structures within the molecule. While these predictions are useful, their accuracy is limited.
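The additive idea behind the Abraham Solvation Model can be sketched in a few lines: overall log solubility is estimated as a linear combination of solute descriptors weighted by solvent-specific coefficients. The descriptor and coefficient values below are purely illustrative, not fitted Abraham parameters.

```python
# Toy sketch of an additive (Abraham-style) solvation model. All numbers
# here are hypothetical placeholders, not published coefficients.

def abraham_log_solubility(descriptors, coefficients):
    """Linear free-energy relationship: logS = c + sum(coef_i * descriptor_i)."""
    return coefficients["c"] + sum(
        coefficients[name] * value for name, value in descriptors.items()
    )

# Hypothetical solute descriptors (E, S, A, B, V in Abraham's notation)
solute = {"E": 0.80, "S": 1.20, "A": 0.30, "B": 0.60, "V": 1.10}

# Hypothetical coefficients for one solvent
ethanol_like = {"c": 0.20, "E": 0.05, "S": -0.50, "A": 0.10, "B": -1.20, "V": 0.80}

print(round(abraham_log_solubility(solute, ethanol_like), 3))  # → -0.17
```

Because each structural fragment contributes independently, the model is fast and transparent, but it cannot capture interactions between fragments, which is one reason its accuracy is limited.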
In the past few years, researchers have begun using machine learning to try to make more accurate solubility predictions. Before Burns and Attia began working on their new model, the state-of-the-art model for predicting solubility was a model developed in Green’s lab in 2022.
That model, known as SolProp, works by predicting a set of related properties and combining them, using thermodynamics, to ultimately predict the solubility. However, the model has difficulty predicting solubility for solutes that it hasn’t seen before.
“For drug and chemical discovery pipelines where you’re developing a new molecule, you want to be able to predict ahead of time what its solubility looks like,” Attia says.
Part of the reason that existing solubility models haven't worked well is that there wasn't a comprehensive dataset to train them on. However, in 2023 a new dataset called BigSolDB was released, which compiled data from nearly 800 published papers, including solubility information for about 800 molecules dissolved in more than 100 organic solvents that are commonly used in synthetic chemistry.
Attia and Burns decided to try training two different types of models on this data. Both of these models represent the chemical structures of molecules using numerical representations known as embeddings, which incorporate information such as the number of atoms in a molecule and which atoms are bound to which other atoms. Models can then use these representations to predict a variety of chemical properties.
One of the models used in this study, known as FastProp and developed by Burns and others in Green’s lab, incorporates “static embeddings.” This means that the model already knows the embedding for each molecule before it starts doing any kind of analysis.
The other model, ChemProp, learns an embedding for each molecule during the training, at the same time that it learns to associate the features of the embedding with a trait such as solubility. This model, developed across multiple MIT labs, has already been used for tasks such as antibiotic discovery, lipid nanoparticle design, and predicting chemical reaction rates.
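The distinction between the two approaches comes down to where the embedding comes from. A static embedding is a fixed-length descriptor vector computed directly from the structure before any training; a learned embedding is produced by network layers whose weights are updated during training. The toy function below illustrates the static case only, using crude character counts over a SMILES string; real descriptor sets such as the one FastProp uses are far richer.

```python
# Minimal illustration of a "static" molecular embedding: a fixed
# descriptor vector derived from the structure itself, with no training.
from collections import Counter

def static_embedding(smiles: str) -> list[float]:
    counts = Counter(smiles)
    # Features: counts of C, N, O symbols and '=' double-bond markers,
    # plus total string length as a crude size proxy.
    return [
        float(counts["C"]),
        float(counts["N"]),
        float(counts["O"]),
        float(counts["="]),
        float(len(smiles)),
    ]

print(static_embedding("CC(=O)O"))  # acetic acid → [2.0, 0.0, 2.0, 1.0, 7.0]
```

A downstream model can consume such a vector directly, whereas a ChemProp-style model would instead learn its own representation of the molecular graph while training on the solubility labels.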
The researchers trained both types of models on over 40,000 data points from BigSolDB, including information on the effects of temperature, which plays a significant role in solubility. Then, they tested the models on about 1,000 solutes that had been withheld from the training data. They found that the models’ predictions were two to three times more accurate than those of SolProp, the previous best model, and the new models were especially accurate at predicting variations in solubility due to temperature.
“Being able to accurately reproduce those small variations in solubility due to temperature, even when the overarching experimental noise is very large, was a really positive sign that the network had correctly learned an underlying solubility prediction function,” Burns says.
Accurate predictions
The researchers had expected that the model based on ChemProp, which is able to learn new representations as it goes along, would be able to make more accurate predictions. However, to their surprise, they found that the two models performed essentially the same. That suggests that the main limitation on their performance is the quality of the data, and that the models are performing as well as theoretically possible based on the data that they’re using, the researchers say.
“ChemProp should always outperform any static embedding when you have sufficient data,” Burns says. “We were blown away to see that the static and learned embeddings were statistically indistinguishable in performance across all the different subsets, which indicates to us that the data limitations that are present in this space dominated the model performance.”
The models could become more accurate, the researchers say, if better training and testing data were available — ideally, data obtained by one person or a group of people all trained to perform the experiments the same way.
“One of the big limitations of using these kinds of compiled datasets is that different labs use different methods and experimental conditions when they perform solubility tests. That contributes to this variability between different datasets,” Attia says.
Because the model based on FastProp makes its predictions faster and has code that is easier for other users to adapt, the researchers decided to make that one, known as FastSolv, available to the public. Multiple pharmaceutical companies have already begun using it.
“There are applications throughout the drug discovery pipeline,” Burns says. “We’re also excited to see, outside of formulation and drug discovery, where people may use this model.”
The research was funded, in part, by the U.S. Department of Energy.
Science no longer enjoys unlimited and universal trust. An array of groups are questioning scientific wisdom. What does this mean for students and researchers? Gabriel Dorthe studies how trust and mistrust emerge through mutual interaction between scientific and research-sceptical thinking.
Singapore is seen as an ideal candidate for nuclear energy, given its technological know-how, institutional maturity, and the geographical constraints it faces in generating renewable power.
This view came from Mr Rafael Mariano Grossi, Director General of the International Atomic Energy Agency (IAEA), who delivered a lecture on developments in atomic energy and nuclear security at NUS on 25 July 2025. The lecture, hosted by the Singapore Nuclear Research and Safety Institute (SNRSI) at NUS, was followed by a question-and-answer session moderated by Associate Professor Leong Ching, NUS Vice Provost (Student Life) and Acting Dean of the Lee Kuan Yew School of Public Policy at NUS.
“When it comes to decarbonising, what are your options? Here, there is no hydropower. You have renewables, but you don’t have much territory... so you cannot have wind parks for kilometres on end,” he told an audience of 250 students, undergraduates, academics, experts and government officials at NUS’ Shaw Foundation Alumni House.
“In my opinion, and in the opinion of many experts, in terms of the options, perhaps Singapore could rightly figure as the most perfect example of a country that needs nuclear energy. With a very small nuclear power plant, you can have a level of energy density and production that you cannot match with anything else.”
Singapore has been contributing to global developments in nuclear research for the last decade and has been a member of IAEA since 1967.
For example, Singapore conducts training on nuclear science and technology for other countries, and its government agencies contribute to technical committees of the IAEA — which functions as the United Nations nuclear watchdog.
While the government announced in February that Singapore would study the potential deployment of nuclear power and systematically build up capabilities in the area, no decision has been made on whether the country will adopt nuclear energy.
Meanwhile, the country is further building its nuclear expertise with SNRSI, which was launched on 11 July 2025.
“Having Mr Grossi here today is of special significance to us because, earlier this month, the Singapore Nuclear Research and Safety Institute was officially launched after 10 years as an Initiative. It didn’t happen overnight, it took us 10 years to grow our size and capability,” said Professor Lui Pao Chuen, Chairman of the SNRSI Management Board, in his opening remarks.
“Singapore is committed and ready to contribute to the safe, peaceful use of nuclear science and technology,” he added.
SNRSI will operate from a new purpose-built facility at NUS, with a S$66 million grant. It plans to double its pool of experts to 100 by 2030, and will play a major role in Singapore’s partnership with the IAEA to train experts from developing countries in nuclear research.
Singapore has also worked with the IAEA on other areas, such as the designation of the NUS Centre for Ion Beam Applications as a Collaborating Centre — the first such centre in Singapore. One focus of the Collaborating Centre is proton beam therapy, which is used in radiation cancer treatment.
What’s driving the nuclear pivot
The pivot towards atomic power has been a global trend, with ASEAN countries also showing interest in collaborating with IAEA to develop nuclear energy capabilities. For instance, countries like Indonesia, Vietnam and Myanmar plan to build nuclear power plants.
“It is not a trend towards nuclear dominance,” noted Mr Grossi, emphasising that the role of nuclear energy is to provide a stable base load for the grid that “never stops”.
Instead, he attributed the rising interest to two factors: decarbonisation and energy security.
If the world aspires to hit the decarbonisation targets of the Paris Agreement, it will have to include the use of nuclear energy. “Decarbonisation without nuclear (energy) is practically utopian,” he added.
A more fragmented world also means growing energy security concerns, as an overreliance on external energy supplies could put countries in a vulnerable position.
“The allure of a source of energy which gives you total independence becomes even clearer when you have a nuclear power plant. You switch it on, it’s yours,” said Mr Grossi.
If Singapore is keen to pursue nuclear energy, he suggested that the Republic could share a nuclear power plant with an ASEAN neighbour, citing how this is done in Slovenia with the Krsko power plant that provides energy to both Slovenia and Croatia.
Safety is top priority
While there are benefits to nuclear energy, the topic of safety, among other areas such as application guidelines and costs, was brought up and addressed by Mr Grossi during the question-and-answer session.
In response to a question on the effects of a nuclear power plant incident in a densely populated city such as Singapore, Mr Grossi assured that safety is IAEA’s top priority. “We at the IAEA develop, together with the countries, emergency preparedness and response mechanisms,” he said. “It’s one indispensable part of nuclear power planning and operation.”
To another question on disposing of radioactive waste, Mr Grossi said that no country can run a nuclear power programme without first having a clearly defined and agreed plan for managing the waste – a process he emphasised must be considered from the start and not left until after the fuel is used. “There are very clear methodologies to deal with it, and they are used…to great success.”
While there have been debates concerning the long-term disposal due to the lasting radiation of spent fuel, Mr Grossi noted that the amount of such waste is extremely low. The waste generated is also inspected to ensure there is no radiation hazard or misuse of nuclear material. “We are the only industry that checks the rubbish,” he added with a smile.
Summing up the potential of nuclear power, he concluded, “Let me say that all of this presents a picture of opportunities, challenges and problems.” It is a blueprint that the IAEA, in partnership with countries like Singapore, will continue to fine-tune and optimise.
Within the past few years, models that can predict the structure or function of proteins have been widely used for a variety of biological applications, such as identifying drug targets and designing new therapeutic antibodies.
These models, which are based on large language models (LLMs), can make very accurate predictions of a protein’s suitability for a given application. However, there’s no way to determine how these models make their predictions or which protein features play the most important role in those decisions.
In a new study, MIT researchers have used a novel technique to open up that “black box” and allow them to determine what features a protein language model takes into account when making predictions. Understanding what is happening inside that black box could help researchers to choose better models for a particular task, helping to streamline the process of identifying new drugs or vaccine targets.
“Our work has broad implications for enhanced explainability in downstream tasks that rely on these representations,” says Bonnie Berger, the Simons Professor of Mathematics, head of the Computation and Biology group in MIT’s Computer Science and Artificial Intelligence Laboratory, and the senior author of the study. “Additionally, identifying features that protein language models track has the potential to reveal novel biological insights from these representations.”
Onkar Gujral, an MIT graduate student, is the lead author of the open-access study, which appears this week in the Proceedings of the National Academy of Sciences. Mihir Bafna, an MIT graduate student in electrical engineering and computer science, and Eric Alm, an MIT professor of biological engineering, are also authors of the paper.
Opening the black box
In 2018, Berger and former MIT graduate student Tristan Bepler PhD ’20 introduced the first protein language model. Their model, like subsequent protein models that accelerated the development of AlphaFold, such as ESM2 and OmegaFold, was based on LLMs. These models, which include ChatGPT, can analyze huge amounts of text and figure out which words are most likely to appear together.
Protein language models use a similar approach, but instead of analyzing words, they analyze amino acid sequences. Researchers have used these models to predict the structure and function of proteins, and for applications such as identifying proteins that might bind to particular drugs.
In a 2021 study, Berger and colleagues used a protein language model to predict which sections of viral surface proteins are less likely to mutate in a way that enables viral escape. This allowed them to identify possible targets for vaccines against influenza, HIV, and SARS-CoV-2.
However, in all of these studies, it has been impossible to know how the models were making their predictions.
“We would get out some prediction at the end, but we had absolutely no idea what was happening in the individual components of this black box,” Berger says.
In the new study, the researchers wanted to dig into how protein language models make their predictions. Just like LLMs, protein language models encode information as representations that consist of a pattern of activation of different “nodes” within a neural network. These nodes are analogous to the networks of neurons that store memories and other information within the brain.
The inner workings of LLMs are not easy to interpret, but within the past couple of years, researchers have begun using a type of algorithm known as a sparse autoencoder to help shed some light on how those models make their predictions. The new study from Berger’s lab is the first to use this algorithm on protein language models.
Sparse autoencoders work by adjusting how a protein is represented within a neural network. Typically, a given protein will be represented by a pattern of activation of a constrained number of neurons, for example, 480. A sparse autoencoder will expand that representation into a much larger number of nodes, say 20,000.
When information about a protein is encoded by only 480 neurons, each node lights up for multiple features, making it very difficult to know what features each node is encoding. However, when the neural network is expanded to 20,000 nodes, this extra space along with a sparsity constraint gives the information room to “spread out.” Now, a feature of the protein that was previously encoded by multiple nodes can occupy a single node.
“In a sparse representation, the neurons lighting up are doing so in a more meaningful manner,” Gujral says. “Before the sparse representations are created, the networks pack information so tightly together that it's hard to interpret the neurons.”
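The expansion-plus-sparsity idea can be sketched in a few lines of NumPy. This is a scaled-down, untrained toy (the paper's setup, e.g. 480 dense units expanded to 20,000, and its training details may differ): a wide ReLU encoder produces the expanded representation, and the loss combines reconstruction error with an L1 penalty that, during training, pushes most activations to exactly zero.

```python
# Scaled-down sketch of a sparse autoencoder: expand a dense activation
# vector into a much wider representation, then reconstruct the input.
# Dimensions and the L1 weight are toy values.
import numpy as np

rng = np.random.default_rng(0)
d_dense, d_sparse = 16, 128          # stand-ins for 480 and 20,000

W_enc = rng.normal(0, 0.1, (d_sparse, d_dense))
b_enc = np.zeros(d_sparse)
W_dec = rng.normal(0, 0.1, (d_dense, d_sparse))

def encode(x):
    # ReLU keeps activations non-negative; combined with the L1 penalty
    # below, training drives most of them to zero.
    return np.maximum(0.0, W_enc @ x + b_enc)

def loss(x, l1_weight=0.01):
    z = encode(x)
    x_hat = W_dec @ z                          # reconstruction
    reconstruction = np.sum((x - x_hat) ** 2)
    sparsity = l1_weight * np.sum(np.abs(z))   # encourages few active nodes
    return reconstruction + sparsity

x = rng.normal(size=d_dense)   # stand-in for a protein's dense representation
z = encode(x)
print(f"{(z > 0).mean():.2f} of sparse units active")
```

After training with the sparsity term, each protein activates only a handful of the wide layer's nodes, which is what lets individual nodes line up with individual features.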
Interpretable models
Once the researchers obtained sparse representations of many proteins, they used an AI assistant called Claude (related to the popular Anthropic chatbot of the same name), to analyze the representations. In this case, they asked Claude to compare the sparse representations with the known features of each protein, such as molecular function, protein family, or location within a cell.
By analyzing thousands of representations, Claude can determine which nodes correspond to specific protein features, then describe them in plain English. For example, the algorithm might say, “This neuron appears to be detecting proteins involved in transmembrane transport of ions or amino acids, particularly those located in the plasma membrane.”
This process makes the nodes far more “interpretable,” meaning the researchers can tell what each node is encoding. They found that the features most likely to be encoded by these nodes were protein family and certain functions, including several different metabolic and biosynthetic processes.
“When you train a sparse autoencoder, you aren’t training it to be interpretable, but it turns out that by incentivizing the representation to be really sparse, that ends up resulting in interpretability,” Gujral says.
Understanding what features a particular protein model is encoding could help researchers choose the right model for a particular task, or tweak the type of input they give the model, to generate the best results. Additionally, analyzing the features that a model encodes could one day help biologists to learn more about the proteins that they are studying.
“At some point when the models get a lot more powerful, you could learn more biology than you already know, from opening up the models,” Gujral says.
The research was funded by the National Institutes of Health.
Harvard clinic uses mindfulness techniques to treat medically induced PTSD
Heart attacks are life-changing events, but one type can be particularly distressing.
Spontaneous coronary artery dissection primarily strikes women under 50. Often, they are physically fit nonsmokers with good cholesterol and normal blood pressure — in other words, the very people who least expect a cardiac emergency. The shock of such an event may help explain why as many as 30 percent of survivors develop symptoms of medically induced post-traumatic stress disorder.
“Medically induced PTSD is basically PTSD that results from a sudden, catastrophic, life-threatening medical condition,” said Christina Luberto, a clinical health psychologist in the Department of Psychiatry at Mass General Hospital/Harvard Medical School. “It actually accounts for about 7 percent of all PTSD cases.”
Luberto is the founding director of the Mindful Living Center, a mental health service embedded with the Mass General Women’s Heart Health Program. The Mindful Living Center is one of the few programs in the country to integrate psychological services directly into cardiovascular care for women.
“We treat survivors whose primary presenting problem is the fear of recurrence,” she said. “They’re terrified by the uncertainty and possibility that it is going to happen again.”
Despite its prevalence, medically induced PTSD wasn’t formally recognized until the 1990s, when the Diagnostic and Statistical Manual of Mental Disorders expanded the definition to include trauma from medical events. It later tightened the criteria to sudden conditions, excluding chronic conditions like cancer or HIV. Research has shown that patients with medically induced PTSD tend to have worse recoveries and a higher risk of death than those without.
Medically induced PTSD symptoms mirror the symptoms of PTSD from external traumas, Luberto said: intrusive memories, hyperarousal, negative changes in mood or belief, and avoidance. But there are key differences.
“People often think of PTSD that results from external events like serving in combat. People may have flashbacks and intrusive memories. They’re thinking about what happened in the past. They might avoid things like celebrations with fireworks and loud noises, friends from that time, and they’re sort of able to do that,” she said. “With medically induced PTSD, the threat is not left in the past. You can’t escape the source of the ongoing threat, because the source of the threat is your own body.”
That reality makes survivors hyper-aware of physical sensations. Sweat or an elevated heart rate can trigger panic. Because exercise can mimic the sensations patients experienced during their heart attack, they may avoid working out — paradoxically, the very thing that could aid recovery and prevent future events. Others may skip medication, avoid medical follow-ups, or, conversely, over-engage with the healthcare system, frequently calling or messaging their providers.
“There’s what we call cognitive reactivity in response to physical symptoms. ‘Why am I sweating? Why is my heart beating? Maybe it’s the coffee, but maybe it’s not. Should I go to the hospital?’ And then all of this thinking creates more physical symptoms of anxiety,” Luberto said. “It’s a vicious cycle. What I hear is the future-oriented worry: ‘Is this going to happen again?’”
Her research shows how the distressing thoughts can escalate. “Survivors start to believe different things about their body, and on some level, about the world. They believe, you know, ‘My body betrayed me. This is going to happen again. I’m not safe.’”
The Mindful Living Center, which opened in October 2023, employs an adapted Mindfulness-Based Cognitive Therapy method based on Luberto’s prior NIH-funded research. In online group therapy sessions, patients confront the source of their distress: their bodies.
“Mindfulness meditation brings you into the body, noticing the body without judgment, feeling sensations, noticing where the body can still feel safe or can still feel comfortable, and being able to regulate your attention to move it out of the body if the anxiety gets too much.”
The results are encouraging. Since it opened, the Mindful Living Center has received 181 referrals and treated 86 patients. Ninety percent of patients in the Mindfulness-Based Cognitive Therapy sessions reported improved emotional health, and 75 percent reported improved cardiac health.
“Stress and anxiety can have significant negative consequences for patients, from how they experience medical care to their ability to empower themselves to take steps to reduce future events,” said Amy Sarma, Cathy E. Minehan Endowed Chair in Cardiology at MGH and an assistant professor of medicine at Harvard Medical School. “However, most cardiologists do not have access to the resources to help their patients as we do at Mass General Brigham. Our partnership with Dr. Luberto in this unique program enables us to significantly advance the care of our patients.”
Nandita Scott, Ellertson Family Endowed Chair in Cardiovascular Medicine and the director of the Women’s Heart Health Program, highlighted the “exceptional support” the mindfulness program has received from the cardiology leadership at Mass General Brigham. “It’s well-established that mental health and cardiovascular outcomes are closely linked, yet few divisions would have had the vision or resources to fund such an initiative,” she said.
Luberto, who is also an executive faculty member in the MGH Health Promotion Resiliency Intervention Research Center and the MGH Benson-Henry Institute for Mind-Body Medicine, hopes to increase the Mindful Living Center’s offerings to other research-backed methodologies for managing medically induced PTSD. In a recent study led by UCLA doctoral student Corinne Meinhausen, with Luberto serving as a co-author, researchers reviewed therapies ranging from traditional cognitive behavioral therapy to written exposure therapy, a short five-session program in which patients write detailed accounts of the traumatic event. The written exposure therapy’s lower dropout rates and strong earlier results make it an appealing option, especially for patients reluctant to commit to longer, more intensive therapies.
Luberto said doctors can be on the lookout for PTSD symptoms resulting from traumatic medical events. The American Heart Association recommends screening for depression; she suggests adding PTSD screening for spontaneous coronary artery dissection patients, along with a clear treatment pathway. There is little research on risk factors or prevention of medically induced PTSD, but compassionate care during hospitalization couldn’t hurt, she said.
“There are trauma-informed care principles in mental healthcare in general that include giving patients choice. Being transparent. Considering cultural and identity factors. It’s an important research question to see if that can prevent risk, but even if it can’t, it’s just good care.”
MIT researchers have developed a reconfigurable antenna that dynamically adjusts its frequency range by changing its physical shape, making it more versatile for communications and sensing than static antennas.
A user can stretch, bend, or compress the antenna to make reversible changes to its radiation properties, enabling a device to operate in a wider frequency range without the need for complex moving parts. With an adjustable frequency range, a reconfigurable antenna could adapt to changing environmental conditions and reduce the need for multiple antennas.
The word “antenna” may draw to mind metal rods like the “bunny ears” on top of old television sets, but the MIT team instead worked with metamaterials — engineered materials whose mechanical properties, such as stiffness and strength, depend on the geometric arrangement of the material’s components.
The result is a simplified design for a reconfigurable antenna that could be used for applications like energy transfer in wearable devices, motion tracking and sensing for augmented reality, or wireless communication across a wide range of network protocols.
In addition, the researchers developed an editing tool so users can generate customized metamaterial antennas, which can be fabricated using a laser cutter.
“Usually, when we think of antennas, we think of static antennas — they are fabricated to have specific properties and that is it. However, by using auxetic metamaterials, which can deform into three different geometric states, we can seamlessly change the properties of the antenna by changing its geometry, without fabricating a new structure. In addition, we can use changes in the antenna’s radio frequency properties, due to changes in the metamaterial geometry, as a new method of sensing for interaction design,” says lead author Marwa AlAlawi, a mechanical engineering graduate student at MIT.
Her co-authors include Regina Zheng and Katherine Yan, both MIT undergraduate students; Ticha Sethapakdi, an MIT graduate student in electrical engineering and computer science; Soo Yeon Ahn of the Gwangju Institute of Science and Technology in Korea; and co-senior authors Junyi Zhu, assistant professor at the University of Michigan; and Stefanie Mueller, the TIBCO Career Development Associate Professor in MIT’s departments of Electrical Engineering and Computer Science and Mechanical Engineering and leader of the Human-Computer Interaction Group at the Computer Science and Artificial Intelligence Lab. The research will be presented at the ACM Symposium on User Interface Software and Technology.
Making sense of antennas
While traditional antennas radiate and receive radio signals, in this work, the researchers looked at how the devices can act as sensors. The team’s goal was to develop a mechanical element that can also be used as an antenna for sensing.
To do this, they leveraged the antenna’s “resonance frequency,” which is the frequency at which the antenna is most efficient.
An antenna’s resonance frequency will shift due to changes in its shape. (Think about extending the left “bunny ear” to reduce TV static.) Researchers can capture these shifts for sensing. For instance, a reconfigurable antenna could be used in this way to detect the expansion of a person’s chest, to monitor their respiration.
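This shift-based sensing idea can be sketched in a few lines of Python. This is an illustrative sketch only, not the team's actual signal processing: the baseline frequency, the sample readings, and the one-percent threshold are all assumed values chosen for the respiration example.

```python
# Minimal sketch of shift-based sensing (illustrative only; the actual
# signal processing used by the researchers is not described here).
# Baseline, readings, and threshold are all assumed values.

def relative_shift(f_measured_hz, f_baseline_hz):
    """Fractional change in resonance frequency from the rest state."""
    return (f_measured_hz - f_baseline_hz) / f_baseline_hz

# Hypothetical respiration trace: resonance readings from a chest-band
# antenna as the chest expands (stretching lowers the resonance here).
baseline = 2.40e9            # assumed 2.4 GHz rest-state resonance
readings = [2.40e9, 2.38e9, 2.35e9, 2.38e9, 2.40e9]

shifts = [relative_shift(f, baseline) for f in readings]
# Flag samples whose resonance dropped more than 1 percent: inhalation.
inhaling = [s < -0.01 for s in shifts]
```

The key point is that the antenna's geometry change, not an added sensor, produces the measurable signal.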
To design a versatile reconfigurable antenna, the researchers used metamaterials. These engineered materials, which can be programmed to adopt different shapes, are composed of a periodic arrangement of unit cells that can be rotated, compressed, stretched, or bent.
By deforming the metamaterial structure, one can shift the antenna’s resonance frequency.
“In order to trigger changes in resonance frequency, we either need to change the antenna’s effective length or introduce slits and holes into it. Metamaterials allow us to get those different states from only one structure,” AlAlawi says.
The device, dubbed the meta-antenna, is composed of a dielectric layer of material sandwiched between two conductive layers.
To fabricate a meta-antenna, the researchers cut the dielectric layer out of a rubber sheet with a laser cutter. Then they added a patch on top of the dielectric layer using conductive spray paint, creating a resonating “patch antenna.”
But they found that even the most flexible conductive material couldn’t withstand the amount of deformation the antenna would experience.
“We did a lot of trial and error to determine that, if we coat the structure with flexible acrylic paint, it protects the hinges so they don’t break prematurely,” AlAlawi explains.
A means for makers
With the fabrication problem solved, the researchers built a tool that enables users to design and produce metamaterial antennas for specific applications.
The user can define the size of the antenna patch, choose a thickness for the dielectric layer, and set the length to width ratio of the metamaterial unit cells. Then the system automatically simulates the antenna’s resonance frequency range.
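Why patch size and dielectric choice set the resonance frequency can be seen from the textbook first-order approximation for a rectangular patch antenna, f ≈ c / (2L√ε_eff). The design tool itself runs a full simulation; the sketch below, with an assumed effective permittivity, only illustrates the scaling.

```python
# Hedged sketch: first-order resonance of a rectangular patch antenna
# via the textbook approximation f ≈ c / (2 · L · sqrt(eps_eff)). The
# design tool runs a full electromagnetic simulation; this only shows
# why patch length and dielectric choice set the resonance frequency.

C = 299_792_458.0  # speed of light in vacuum, m/s

def patch_resonance_hz(length_m, eps_eff):
    """Approximate dominant-mode resonance of a patch of given length."""
    return C / (2.0 * length_m * eps_eff ** 0.5)

# Example: a 30 mm patch over a rubber-like dielectric (eps_eff ~3 assumed)
f0 = patch_resonance_hz(0.030, 3.0)  # roughly 2.9 GHz
```

Because frequency scales inversely with length, stretching or compressing the metamaterial lattice, which changes the effective electrical length, moves the resonance in a predictable direction.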
“The beauty of metamaterials is that, because it is an interconnected system of linkages, the geometric structure allows us to reduce the complexity of a mechanical system,” AlAlawi says.
Using the design tool, the researchers incorporated meta-antennas into several smart devices, including a curtain that dynamically adjusts household lighting and headphones that seamlessly transition between noise-cancelling and transparent modes.
For the smart headphone, for instance, when the meta-antenna expands and bends, it shifts the resonance frequency by 2.6 percent, which switches the headphone mode. The team’s experiments also showed that meta-antenna structures are durable enough to withstand more than 10,000 compressions.
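As an illustration of how such a shift could drive the mode switch, here is a hypothetical Python sketch: the 2.6 percent threshold comes from the figure quoted above, but the hysteresis band and the state-machine logic are assumptions for illustration, not the team's firmware.

```python
# Hypothetical mode-switch logic (not the team's implementation): flip
# the headphone mode when the resonance shift crosses ~2.6 percent,
# with a small assumed hysteresis band so noise near the threshold
# does not cause rapid toggling between modes.

def make_mode_tracker(threshold=0.026, hysteresis=0.005):
    """Return a stateful updater mapping fractional shifts to a mode."""
    state = {"mode": "noise-cancelling"}

    def update(shift):
        if state["mode"] == "noise-cancelling" and shift >= threshold:
            state["mode"] = "transparent"
        elif state["mode"] == "transparent" and shift <= threshold - hysteresis:
            state["mode"] = "noise-cancelling"
        return state["mode"]

    return update

track = make_mode_tracker()
modes = [track(s) for s in [0.0, 0.027, 0.024, 0.019, 0.0]]
```

Hysteresis is a common design choice for threshold-driven switches: a shift of 2.4 percent keeps the headphones transparent rather than flickering back, and only relaxing below 2.1 percent restores noise cancelling.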
Because the antenna patch can be patterned onto any surface, it could be used with more complex structures. For instance, the antenna could be incorporated into smart textiles that perform noninvasive biomedical sensing or temperature monitoring.
In the future, the researchers want to design three-dimensional meta-antennas for a wider range of applications. They also want to add more functions to the design tool, improve the durability and flexibility of the metamaterial structure, experiment with different symmetric metamaterial patterns, and streamline some manual fabrication steps.
This research was funded, in part, by the Bahrain Crown Prince International Scholarship and the Gwangju Institute of Science and Technology.
A meta-antenna (shiny latticed material) could be incorporated into a curtain that dynamically adjusts household lighting. Here, a prototype is seen retracted (top left), expanded (bottom), and next to the latching mechanism (top right).
The National University of Singapore (NUS) and Universiti Malaya (UM) will celebrate more than six decades of collaboration with a series of commemorative events in Perak, Malaysia, later this month. Mr Tharman Shanmugaratnam, President of the Republic of Singapore and Chancellor of NUS, accompanied by an NUS delegation, will travel to Ipoh on 25 August 2025 for a working visit. President Tharman will be accompanied by his spouse, Mrs Jane Ittogi Shanmugaratnam.
The NUS delegation will include Chairman of the NUS Board of Trustees Mr Hsieh Fu Hua, NUS President Professor Tan Eng Chye, NUS Chief Alumni Officer Ms Ovidia Lim-Rajaram, NUS Golf Captain Ms Angelia Tay, academics, golfers and staff.
NUS marks its 120th anniversary in 2025, a legacy that began in 1905 with the establishment of the Straits Settlements and Federated Malay States Government Medical School, which later became Universiti Malaya. In 1962, UM’s Singapore campus was reconstituted as the University of Singapore, which later evolved into the National University of Singapore. The strong ties between the two institutions have continued alongside the warm friendship between Singapore and Malaysia.
“The longstanding friendship between NUS and UM is rooted in a shared history and a mutual commitment to collaboration and educational excellence. As both universities mark their 120-year legacy this year, we are honoured to celebrate this enduring partnership through meaningful engagements which reflect not only our deep institutional ties, but also the warm and longstanding bonds between Singapore and Malaysia,” said Professor Tan Eng Chye, President of NUS.
Meeting and Royal Welcome in Kuala Kangsar
On the evening of 25 August, President Tharman will have a meeting with the Chancellor of UM and Sultan of Perak His Royal Highness Sultan Nazrin Muizzuddin Shah Ibni Almarhum Sultan Azlan Muhibbuddin Shah Al-Maghfur-Lah at the Istana Iskandariah in Kuala Kangsar. This will be followed by a Royal Dinner hosted by HRH Sultan Nazrin Shah and the Raja Permaisuri of Perak, Her Royal Highness Tuanku Zara Salim, for President Tharman, Mrs Tharman and the NUS delegation.
On 26 August, more than 100 golfers from NUS and UM will tee off at the Royal Perak Golf Club in the 54th UM–NUS Inter-University Tunku Chancellor Golf Tournament. First held in 1968 to strengthen ties between the two institutions, the tournament alternates venues between Malaysia and Singapore. This year’s competition in Ipoh will culminate in a prize presentation lunch, attended by the two Chancellors, Mr Hsieh Fu Hua, Chairman of the NUS Board of Trustees, Tan Sri Zarinah Anwar, Chairman of UM’s Board of Directors, Professor Tan Eng Chye, President of NUS, and Professor Dato’ Seri Ir Dr Noor Azuan Abu Osman, Vice-Chancellor of UM.
2nd UM–NUS Joint Academic Symposium: Precision Health
Also on 26 August, NUS and UM researchers will convene for the second Joint Academic Symposium held alongside the tournament, this year focusing on precision health. The symposium will feature 10 keynote addresses and technical talks by leading scholars and clinicians from both universities, alongside a special presentation on the shared history of Singapore and Malaysia. President Tharman and HRH Sultan Nazrin Shah will preside over the keynote and shared history presentations. The 2024 joint academic symposium, which was held at NUS, was on biomedical engineering.
Kostin was known for his commitment to teaching fundamental principles in a way that demanded excellence from his students but that empowered them in their own careers.
Report on classroom literature shows staying power for ‘Gatsby,’ ‘Of Mice and Men,’ other classics. Time to move on?
Look back 40 years and you’ll see a lot of seismic change. The rise of the Internet, the smartphone revolution, and now AI everywhere. The end of the Cold War and the dawn of many messier conflicts. The overturning of paradigms of gender and sexuality, and then the backlash.
What are young people reading to help them make sense of their world? According to a recent report, pretty much the same things their parents read.
That report — compiled by researchers Kyungae Chae and Ricki Ginsberg for the National Council of Teachers of English — queried more than 4,000 public school teachers in the U.S. about what they assign students in grades six through 12.
It found little movement at the top of the English curriculum. F. Scott Fitzgerald’s “The Great Gatsby,” John Steinbeck’s “Of Mice and Men,” and a few Shakespeare tragedies occupy half of the top 10 most-assigned spots — just as they did in 1989. Even back in 1964, the top 10 was remarkably similar: If two Dickens novels have been dropped, “Hamlet” and “Macbeth” have not.
Classics are “classic” for a reason, of course. But that English-class inertia coincides with a trend that troubles educators, authors, and many parents: a long-term slide in the habit of reading among young Americans.
Some worry that — in a diverse and polarized nation — books that once felt accessible now feel remote or impenetrable, or that cultural conservatism or education bureaucracies have kept the curriculum from a healthy evolution.
With their many avid readers, Harvard’s classrooms contain almost as many views of the problem, if it is one, of curricular stagnation.
Stephanie Burt, the Donald P. and Katherine B. Loker Professor in the Department of English, made headlines last year as she launched a course on Taylor Swift. It was, in part, a self-conscious bid to use the world’s most popular songwriter as a gateway drug to Wordsworth and hermeneutics.
But Burt — also a working poet — said that her embrace of Swift is no sign that she has moved beyond, say, John Donne. To teach Shakespeare to young people, she said, is “not conservatism — it’s conservation, like the protection of old-growth forests.”
Rosette Cirillo, too, sees pedagogical value in true classics from the top of the English-language pantheon — though for a different reason.
Today, Cirillo is a lecturer and a teacher educator at the Harvard Graduate School of Education. But not so long ago, she was teaching eighth-grade English in Chelsea, Massachusetts, a largely Latin-American enclave where nearly half the students are classed as English learners.
“If I had an eighth-grader who went on to Harvard after he graduated Chelsea High, and he had never read Shakespeare, he would be at a serious disadvantage,” Cirillo said.
And, she stresses, she’s arguing less in terms of assimilation than of challenge.
“If I don’t understand ‘The Great Gatsby’ — this story of the American dream — and the idea of a masculine reinvention in order to achieve something, then I don’t understand the mythology of America enough that I could critique it, that I can say, ‘I don’t want that,’” Cirillo said. “We’re thinking about building a language and culture of power and building access for our students.”
The teachers and researchers who spoke to the Gazette were divided on whether Steinbeck, Fitzgerald, and Harper Lee still deserve their ubiquity.
“To Kill a Mockingbird,” which Lee published in 1960, could be considered the foundational American text of the ‘white savior’ archetype, Burt said. And, yes, Steinbeck was a Nobel laureate in literature, but with “Of Mice and Men” — “the point is that somebody cognitively disabled is probably going to commit a murder … the high school curriculum would be better off without that,” she said.
And while Burt praised “Gatsby” as a great option for many teens, Catherine Snow was less charitable.
“I always hated that book,” she said.
Snow, a legendary literacy researcher, recently retired from the Harvard Graduate School of Education. She argued that hard evidence still shows real benefits that come from building readers.
Not only do well-read people perform better on tests of general knowledge — but as early as elementary school, Snow said, “better readers are better at understanding the multiple points of view that might be held about a civic or a moral issue. They’re less likely to think that if you disagree with them, it’s because you’re stupid … I think that’s pretty important.”
Digesting a text, analyzing tone and symbolism, understanding meaning and perspective — it’s all still useful. But, Snow said, some older books may no longer be ideal teaching tools.
“You can make all of those hoary texts relevant to students today,” Snow said. (True even of “Gatsby,” she joked: “Here’s a chance to learn about some really boring, worthless people, and how badly they’ve screwed up their lives.”)
“But,” Snow added, “an easier and perhaps more efficient approach would be to try to think about a selection of texts which are more automatically relevant that can be used to develop the same very important cognitive and linguistic and analytic skills.”
“Harry Potter” and “The Hunger Games” traffic, too, in “big, inherent, cultural themes and memes,” she said, and neither is “particularly easy reading.”
The cultural phenomena around those two series defied a decadeslong slump in pleasure reading among youth. In light of that trend, Cirillo and others see room to renovate the curriculum in the margins.
For Cirillo, stories by writers of color — from Toni Morrison to Junot Díaz — should by now be standard fare, part of a “new canon” to be read alongside the old one.
Burt’s chief concern, meanwhile, is the smartphone and its iron grip on our attention. “We’re living through a change in media that comes from a change in technology that is — unfortunately — at least half as consequential as the printing press,” Burt said. “I hate it; it makes me sad. But it’s not something we can wish away.”
Burt proposed shelving “Of Mice and Men” in favor of Frederick Douglass’ first autobiography, as “one piece of American prose literally everyone should have to read.”
Whether or not it can be neatly quantified, teachers of English still believe that there is something irreplaceable about profound immersion in the world of a book. Joining their number is M.G. Prezioso, a 2024 Ed School grad now conducting postdoctoral research on that very phenomenon.
In a recent journal article, Prezioso found a cyclical relationship between frequent reading and “story-world absorption” — a virtuous cycle of joy in reading that might lessen the need for external motivators.
And her ongoing research on grade-school students in Massachusetts and Pennsylvania has yielded early but promising correlations between that kind of absorption and the reading comprehension skills measured by standardized tests.
But that doesn’t mean abandoning what is already taught, Prezioso said. “There tends to be this dichotomy, first of all, between classic, canonical books versus books that are fun, as if canonical books can’t be engaging or dramatic or enjoyable to read.”
Prezioso was reminded of that in her surveys of high school students. What did they find most engrossing? “Harry Potter,” “The Hunger Games,” Edgar Allan Poe — and “Of Mice and Men.”
Why Malcolm X matters even more 60 years after his killing
New book by Mark Whitaker examines growth of artistic, political, cultural influence of controversial Civil Rights icon
Christina Pazzanese
Harvard Staff Writer
Malcolm X was the provocative yet charismatic face of Black Nationalism and spokesman for the Nation of Islam before he was gunned down at an event in New York City on Feb. 21, 1965, after breaking with the group.
In a new book, “The Afterlife of Malcolm X: An Outcast Turned Icon’s Enduring Impact on America” (2025), journalist Mark Whitaker ’79 explores how the controversial Civil Rights figure’s stature and cultural legacy have only grown since his death.
With dazzling verbal flair, Malcolm X’s advocacy for Black self-determination and racial pride stirred many of his contemporaries like Muhammad Ali, John Coltrane, Maya Angelou, and the founders of the Black Panther Party, and helped spur the Black Arts Movement and the experimental genre known as “Free Jazz.”
Whitaker notes that even decades later Malcolm X’s words and ideas have continued to influence new generations of artists and activists, including NBA Hall of Famer Kareem Abdul-Jabbar, playwright August Wilson, filmmaker Spike Lee, pop star Beyoncé, and rappers Tupac Shakur and Kendrick Lamar, among others.
Whitaker recently spoke with the Gazette about why Malcolm X continues to shape American culture. The conversation has been edited for clarity and length.
You say Malcolm X’s cultural influence is even greater than when he was alive. Why is that?
You have to start with “The Autobiography of Malcolm X” [co-authored by Alex Haley]. Many more people, even in the ’60s but certainly subsequently, have gotten to know him through “The Autobiography” than anything else. It’s an extraordinary book. There’s a reason why it’s one of the most read and influential books of the last half century. There are few books by public figures of his stature where you experience this extraordinary personal journey he underwent, from losing his parents at a young age to becoming a street hustler and going to prison, and then turning his life around through the Nation of Islam, becoming a national figure, but then becoming disenchanted with the Nation and with Elijah Muhammad, going out on his own, making a pilgrimage to Mecca, traveling the world, reassessing all of his thoughts and beliefs about white people and separatism and so forth. So that’s extraordinary.
One of the things that’s interesting is he keeps getting rediscovered generation after generation by young people. I think he spoke to young people for a variety of reasons. One is the reality of race that he described was closer to what they were witnessing than the “I Have a Dream” speech.
There was a hard-headed realism about his analysis of race relations that spoke to young people. Even before you get to politics, his emphasis was on psychology, on pride, and on self-belief and on culture. The belief that Black folks had to start with celebrating themselves and their own culture and their own history — that was extremely appealing to subsequent generations.
I also think there was just something about the way he communicated. There’s a reason that the pioneers of hip-hop thought that you could take snippets of his speeches and put them in the middle of raps, and it would still sound like it belonged. There was something incredibly direct and pithy and honest about the way he communicated.
You put those elements together — his hard-headed analysis, his emphasis on culture and self-belief and pride, and his extraordinary communication — generation after generation of people rediscover that and feel that all of those things are still very powerful.
So many important Black artists, writers, musicians, and activists of that period had either a personal relationship with Malcolm X or said they had an epiphany of sorts after listening to him speak. Why do you think that was?
Part of it was that he did believe, very strongly, that politics is downstream from culture. That was something that he very much believed and preached.
It was interesting because his parents were Black nationalists of the Marcus Garvey generation. And so followers of Marcus Garvey of their generation basically said, “Things are so bad for Black people in America that they have to go someplace else, whether it be someplace in Africa or the Caribbean.” There was this idea of a Black homeland, someplace else that everybody would get on ships and go to.
Malcolm explicitly said, “We are a nation, but we belong here.” In his view, the way Black folks should practice nationalism is by staying in America but demanding their own culture, which began with studying their own history. In his separatist era, it was literally we have our own networks of support. He was a big believer in Black business by and for Black people. That was a cultural project as much as a political project.
He lived in an era when a lot of Black culture, even though it was separate from white culture, sought to emulate white culture. A lot of the societies and the rituals were Black versions of white rituals. And he said, “That’s a form of brainwashing. We shouldn’t seek to be like white people. We should have our own culture.”
So, starting with the Black Arts Movement and the “Free Jazz” movement in the ’60s, and then later, the hip-hop generation and today’s artists like Kendrick Lamar, Beyoncé, all the great artists who still invoke him, that’s the message they’re picking up on as much as his political message.
There’s also something just so supremely confident about him that people relate to. He was unapologetically who he was. He’s preaching Black pride and so forth with such supreme elegance and confidence and humor. That’s always appealing.
One chapter looks at Malcolm X as a hero to the political left and right. President Barack Obama has talked about how influential the autobiography was on him as a teenager, and Supreme Court Justice Clarence Thomas has also spoken about his attraction to Malcolm X and his message of self-determination when he was in college. Few political or cultural figures today have that kind of appeal. What do you attribute that to?
There are people on the left who revere Malcolm X who were appalled that Clarence Thomas would say he’s also a hero to him, and feel like Clarence Thomas just cherry-picked the parts of his message that are convenient to him — the emphasis on Black business, the skepticism about integration and so forth. I spent a lot of time researching that chapter and talking not to Thomas himself, but to his clerks and people who had written about his interest in Malcolm X, and I think it was sincere.
Malcolm X was a truth teller. I don’t think he was interested in being a hero to white people. He would go around saying things like, “I prefer the white racist who at least has his cards on the table to the white liberal who can’t be trusted.” And as we see today, people embrace people who attack the people who they oppose.
“Malcolm X came to Harvard in 1961 and then twice in 1964 to talk with Harvard Law School students and to debate faculty. He was known for his willingness to speak in all sorts of settings, whether a college campus, a street corner, or a TV talk show.”
Would Malcolm X be surprised to find that he’s still so influential?
It’s a tricky thing for biographers to say what would he have thought. It’s presumptuous, but one of the things that is clear is that people at the time who were followers of his said his message and his influence will outlive him. Actor Ossie Davis said that in his eulogy. He said, “What we put in the ground now is only a seed which will rise up to meet us.”
Sociologist Harry Edwards, when he was organizing a Malcolm X day at San Jose State — this was a year after King’s assassination — people said, “Why all this fuss about Malcolm X and not about King?” And Harry Edwards said the thing about Malcolm X is it’s not so much what he did during his lifetime, it’s what he inspired in others, which will continue. There’s something about Malcolm that is still alive in the influence that he’s having on all these other people.
The 10-day immersion course was the first international academic experience for many of the students from the University's first-generation and lower-income (FLI) community.