A collaborative team of researchers analyzed the information-seeking styles of more than 480,000 people from 50 countries and found that gender and education inequality track different types of knowledge exploration. Their findings suggest potential cultural drivers of curiosity and learning.
Since its founding in 2008, the short-term homestay platform Airbnb has expanded to 100,000 cities in more than 220 countries, and, according to data from the company, 1.5 billion guests had stayed in Airbnb-listed properties through 2023.
The MIT Kavli Institute for Astrophysics and Space Research (MKI) is a project lead for one of two finalist missions recently selected for NASA's new Probe Explorers program. Working with collaborators at the University of Maryland and NASA's Goddard Space Flight Center, the team will produce a one-year concept study to launch the Advanced X-ray Imaging Satellite (AXIS) in 2032.
Erin Kara, associate professor of physics and astrophysicist at MIT, is the deputy principal investigator for AXIS. The MIT team includes MKI scientists Eric Miller, Mark Bautz, Catherine Grant, Michael McDonald, and Kevin Burdge. Says Kara, "I am honored to be working with this amazing team in ushering in a new era for X-ray astronomy."
The AXIS mission is designed to revolutionize the view scientists have of high-energy events and environments in the universe using new technologies capable of seeing even deeper into space and further back in time.
"If selected to move forward," explains Kara, "AXIS will answer some of the biggest mysteries in modern astrophysics, from the formation of supermassive black holes to the progenitors of the most energetic and explosive events in the universe to the effects of stars on exoplanets. Simply put, it's the next-generation observatory we need to transform our understanding of the universe."
Critical to AXIS's success is the CCD focal plane — an array of imaging devices that record the properties of the light coming into the telescope. If selected, MKI scientists will work with colleagues at MIT Lincoln Laboratory and Stanford University to develop this high-speed camera, which sits at the heart of the telescope, connected to the X-ray Mirror Assembly and telescope tube. The work to create the array builds on previous imaging technology developed by MKI and Lincoln Laboratory, including instruments flying on the Chandra X-ray Observatory, the Suzaku X-ray Observatory, and the Transiting Exoplanet Survey Satellite (TESS).
Camera lead Eric Miller notes that "the advanced detectors that we will use provide the same excellent sensitivity as previous instruments, but operating up to 100 times faster to keep up with all of the X-rays focused by the mirror." As such, the development of the CCD focal plane will have significant impact in both scientific and technological realms.
"Engineering the array over the next year," adds Kara, "will lay the groundwork not just for AXIS, but for future missions as well."
The MIT Stephen A. Schwarzman College of Computing has announced the launch of a new program to support postdocs conducting research at the intersection of artificial intelligence and particular disciplines.
The Tayebati Postdoctoral Fellowship Program will focus on AI for addressing the most challenging problems in select scientific research areas, and on AI for music composition and performance. The program will welcome an inaugural cohort of up to six postdocs for a one-year term, with the possibility of renewal for a second term.
Supported by a $20 million gift from Parviz Tayebati, an entrepreneur and executive with a broad technical background and experience with startup companies, the program will empower top postdocs by providing an environment that facilitates their academic and professional development and enables them to pursue ambitious discoveries. “I am proud to support a fellowship program that champions interdisciplinary research and fosters collaboration across departments. My hope is that this gift will inspire a new generation of scholars whose research advances knowledge and nurtures innovation that transcends traditional boundaries,” says Tayebati.
"Artificial intelligence holds tremendous potential to accelerate breakthroughs in science and ignite human creativity," says Dan Huttenlocher, dean of the Schwarzman College of Computing and Henry Ellis Warren Professor of Electrical Engineering and Computer Science. “This new postdoc program is a remarkable opportunity to cultivate exceptional bilingual talent combining AI and another discipline. The program will offer fellows the chance to engage in research at the forefront of both AI and another field, collaborating with leading experts across disciplines. We are deeply thankful to Parviz for his foresight in supporting the development of researchers in this increasingly important area.”
Candidates accepted into the program will work on projects in one of six disciplinary areas: biology/bioengineering, brain and cognitive sciences, chemistry/chemical engineering, materials science and engineering, music, and physics. Each fellow will have a faculty mentor in the disciplinary area as well as in AI.
The Tayebati Postdoctoral Fellowship Program is a key component of a larger focus of the MIT Schwarzman College of Computing aimed at fostering innovative research in computing. As part of this focus, the college has three postdoctoral programs, each of which provides training and mentorship to fellows, broadens their research horizons, and helps them develop expertise in computing, including its intersection with other disciplines.
Other programs include MEnTorEd Opportunities in Research (METEOR), which was established by the Computer Science and Artificial Intelligence Laboratory in 2020 and recently expanded through the college to span all of MIT. METEOR's goal is to support exceptional scholars in computer science and AI and to broaden participation in the field.
In addition, the Social and Ethical Responsibilities of Computing (SERC), a cross-cutting initiative of the MIT Schwarzman College of Computing, offers researchers exploring how computing is reshaping society the opportunity to participate as SERC postdocs. SERC postdocs engage in a number of activities throughout the year, including leading interdisciplinary teams of MIT undergraduate and graduate students, known as SERC Scholars, on research projects investigating such topics as generative AI and democracy, the fight against deepfakes, data ownership, and the societal impact of gamification.
Last month, the MIT Office of Graduate Education celebrated National Student Parent Month with features on four MIT graduate student parents. These students’ professional backgrounds, experiences, and years at MIT highlight aspects of diversity in our student parent population.
Diana Grass is one of MIT’s most involved graduate student parents. Grass is a third-year PhD student in medical engineering and medical physics in the joint Harvard-MIT Health Sciences and Technology program, and the mother of two children. As co-founder and co-president of MIT’s Graduate First Generation and Low-Income student group (GFLI@MIT), Grass is a strong advocate for first-generation grad students and student parents.
Fifth-year civil and environmental engineering PhD student Fabio Castro is a new father. Prior to MIT, he was an engineer and logistics manager at an energy firm in Brazil and volunteered with Doctors Without Borders in South Sudan. He and his wife, Amanda, welcomed their daughter, Sofia, last fall.
First-year MIT Sloan MBA student Elizabeth Doherty shared her experience as a career changer and mother of two young children. Doherty began her career as a lower elementary school teacher, working in both public and private schools. After switching gears to work as a senior digital learning specialist at Bain & Co., she recognized the importance of company culture, which led her to pursue a master’s degree in business administration.
Matthew Webb is working on his second MIT degree as a second-year PhD student in the Center for Transportation and Logistics. He shared the ways in which his grad student experience is different now, as a father of three, than it was when he was a master's student in the Operations Research program without children.
All four student parents came from different professional backgrounds and departments, but one theme was consistent in all their stories: the support of the MIT families community. From pitching in to help new parents to coordinating play dates and sharing information, MIT’s student parents are there for one another.
For Doherty, family-friendliness was a top priority when she selected an MBA program. MIT stood out to her because of the family housing, the on-campus childcare, and the opportunities to meet other student families. Doherty felt affirmed in her decision to attend MIT when she enrolled and the MIT Sloan School of Management reached out with a welcoming note and a gift. “It highlighted how thoughtful MIT has been about creating a strong infrastructure for student parents,” she says.
Grass points to the importance her family placed on moving into an on-campus residence, as her family lacked community in their previous off-campus home. This move to MIT’s campus added convenience to the family’s daily routine, and helped them meet other student families.
Before returning to MIT for his PhD, Webb was unaware of the support offered to graduate student families. He was pleasantly surprised to discover the Office of Graduate Education's resources and programming for families through an email in his first semester. His wife, Rachel, and their three children also take advantage of the activities hosted by MIT Spouses and Partners Connect while Webb goes to class. Some favorites have included ice cream and bubble tea outings, "crafternoons," and a tour of Fenway Park.
Castro remembers how his family housing neighbors showed up for him and his family when they needed it most. In anticipation of their first child’s birth, Castro and his wife, Amanda, arranged for Amanda’s parents to come to Cambridge to help them in the early weeks as first-time parents. When these plans unexpectedly fell through, their community in Westgate stepped up. For weeks, other MIT families came by to teach them how to care for their newborn, and dropped off meals at their door.
He was touched by these gestures — the support was a huge benefit of choosing to live on campus, and something that would not have happened had he lived in an off-campus apartment. “It’s something I’ll never forget,” Castro says.
Grappling with how clearings may support rainforest animal life
Anne J. Manning
Harvard Staff Writer
New research offers detailed overview of layout, makeup of canopy gaps in Congo
“Tropical rainforest” conjures images of close-packed trees, dense humidity, and abundant, diverse animal life.
But rainforests in the Congo Basin of west-central Africa also host lesser-known clearings called bais. Some stretch the length of 40 football fields; others only a few hundred feet. Though not widespread, they appear to play a big role in making the rainforest a highly complex, biodiverse habitat, and new research may boost understanding of how and why.
A new study in the journal Ecology provides an unprecedented, detailed overview of bais’ layout, makeup, and abundance across more than 5,000 square miles of conserved forest in Odzala-Kokoua National Park, Republic of the Congo. Culminating more than two years of field study, the work was led by Evan Hockridge, a Griffin Graduate School of Arts and Sciences student in the lab of Andrew Davies, assistant professor in the Department of Organismic and Evolutionary Biology.
“This was a huge data collection effort, involving everything from drones to soil measurements to camera trapping to identification of plant species,” Hockridge said.
Hockridge originally set out to study how large African animals engineer their own ecosystems, but quickly realized megafauna cannot be understood outside the bais they inhabit.
“Animals are extremely attracted to these giant clearings in the middle of the forest, including many endangered animals like the Western lowland gorilla and the African forest elephant,” Hockridge said. “These keystone conservation priority species will spend enormous portions of their lives basically just moving between bais.”
Coming upon a bai after hiking through thick canopies of trees is “stunning,” according to Hockridge, who spent several months of 2021 in Congo collecting data and leading teams.
Without warning, the trees stop, opening into a clearing where forest buffalo often lounge among short grasses and sedges. A stream cuts through the expanse. Flocks of a thousand African green pigeons land nearby to gather salt and other soil nutrients. “It’s like something out of a picture book, but the picture book doesn’t exist,” Hockridge said.
For their study, the scientists developed a technically sophisticated remote-sensing protocol using drone-based Light Detection and Ranging (Lidar) and satellites, producing models and maps of bais across the vast landscape of the Congo Basin. They found many more bais than anyone had expected — more than 2,000 distinct clearings in the national park, far more than the roughly 250 that had been informally counted.
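The Lidar side of such a protocol often comes down to finding large openings in a canopy height model, a raster of vegetation heights derived from the point cloud. The sketch below is a generic illustration of that idea, not the team's actual pipeline; the 2-meter height cutoff, minimum gap size, and synthetic input are all assumptions for demonstration.

```python
import numpy as np
from scipy import ndimage

def find_clearings(chm, height_thresh=2.0, min_area_px=500):
    """Label contiguous low-canopy regions in a canopy height model (CHM)."""
    open_mask = chm < height_thresh                    # candidate gap pixels
    labels, n = ndimage.label(open_mask)               # connected components
    if n == 0:
        return labels
    sizes = ndimage.sum(open_mask, labels, index=np.arange(1, n + 1))
    too_small = np.isin(labels, np.flatnonzero(sizes < min_area_px) + 1)
    labels[too_small] = 0                              # drop tiny openings
    return labels

# Tiny synthetic demo: uniform 30 m canopy with one low-vegetation opening.
chm = np.full((200, 200), 30.0)
chm[80:110, 90:120] = 0.5
print(find_clearings(chm).max())  # -> 1 clearing detected
```

In practice the same labeling step would run over georeferenced tiles, with the pixel threshold converted to real-world area.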
Yet the total habitat that bais encompass is quite small — less than 0.2 percent of the entire national park, according to the research. Varying in size, they also tend to be clustered together, which could ease conservation efforts, Hockridge said.
The analysis also unveiled a tantalizing new insight into the biological makeup of bais: stark differences in plant composition between bais frequented by gorillas and those frequented by elephants. The researchers aren’t sure why.
“There’s a great need to understand what’s happening with these bais because they’re so important to organisms we’re trying to conserve,” Hockridge said. “Our goal is to understand how animals are interacting with these clearings. Are they making them? How dependent are they on them? Are these clearings stable over time?” Their next study may delve deeper into these questions, Hockridge said.
Paper authors include collaborators Gwili Gibbon, head of research and monitoring at Odzala-Kokoua National Park, and Sylvain Ngouma and Roger Ognangue, research “ecomonitors” at the park who served as the Harvard team’s local experts in the area’s biology and botany.
“This work would have been impossible without them,” Hockridge said. “They’re the most quintessential partners in the work we do.”
The Congo Basin’s rainforests offer so much more than just the carbon they store, noted Davies, “and we are still just barely scratching the surface of what we know about them.”
“This study helps us understand a little bit more about their functioning, and the treasure trove of biodiversity they hold, which only inspires and excites us to keep exploring and discovering more of their secrets,” Davies said.
Stroboscopic technique uses darkness to shine light on the science of movement
Exceptional student athletes, artists, and performers aren’t hard to come by under the bright lights of Harvard’s sports arenas and performance spaces. These images, however, were taken in the dark — a necessary technical requirement to make images using stroboscopic flash.
Pioneered in the 19th century by Eadweard Muybridge to study the motion of galloping horses scientifically, the process also had clear artistic merit, especially as used by the more contemporary Harold E. Edgerton and Gjon Mili.
How we did it
During photoshoots, the students and I choreograph a short movement, usually one to three seconds in length. Setting my shutter speed to match this, I place my camera on a tripod facing the student and point my flash unit toward the student’s path of motion. I then turn off the lights and make adjustments to ensure proper exposure of the photos.
As students perform, the flash unit fires repeatedly, each flash creating another likeness of the person. We continue to take photos and adjust variables until we get an image that pleases us both. As a result, the photos in this project were all created in-camera, not by compositing multiple images in Photoshop. Completing these shoots — all in a completely dark room — made for a collaborative and technically challenging project that yielded delightfully unique results.
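For the technically curious, the exposure bookkeeping is simple. A back-of-the-envelope sketch with assumed numbers (not the settings from these shoots): the number of likenesses on a frame is the shutter duration times the strobe rate, and the spacing between likenesses is the subject's speed divided by that rate.

```python
# Back-of-the-envelope strobe math; all numbers are illustrative assumptions.
duration_s = 2.0         # shutter stays open for the whole movement
flash_hz = 6.0           # strobe firing rate
subject_speed_m_s = 1.5  # how fast the subject crosses the frame

likenesses = duration_s * flash_hz        # exposures stacked on one frame
spacing_m = subject_speed_m_s / flash_hz  # gap between adjacent likenesses

print(f"{likenesses:.0f} likenesses, about {spacing_m:.2f} m apart")
# -> 12 likenesses, about 0.25 m apart
```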
Scarier than ghosts: A nurse superfan and a spouse with secret rooms
Steven Pinker, Maria Tatar, other scholars recommend books for Halloween season
‘Adela’s House,’ a short story in ‘Things We Lost in the Fire’
Recommended by Laura van den Berg, Briggs-Copeland Lecturer in the Creative Writing Program; author of “The Third Hotel”
In “Adela’s House,” a brother and sister enter a derelict house, along with their neighbor, Adela. The house quickly proves to be nightmarish, possessed with its own terrible life force; once inside, Adela is never seen again. While the plot summary of “Adela’s House” might sound like a conventional haunted house tale, Mariana Enríquez is after something far more charged. In her translator’s note, Megan McDowell writes that “what there is of gothic horror in the stories in ‘Things We Lost in the Fire’ mingles with and is intensified by their sharp social criticism … most of Mariana’s characters exist in a border space between the comfortable here and a vulnerable there; this latter could be a violent slum or a mysteriously living house, but it operates according to an unknown and sinister rationale, and it is frighteningly near.” In Enríquez’s hands, the house at the center of “Adela’s House” is a conduit for exploring both individual and collective trauma, for showing us just how close at hand the ghosts of the past are.
Bluebeard
Recommended by Maria Tatar, John L. Loeb Professor of Germanic Languages and Literatures and of Folklore and Mythology
A man, a woman, and a house with a chamber, its floor awash in blood, with corpses hanging from hooks in the walls. These are the main features of “Bluebeard,” a horror story in which the title figure tests the obedience of his wife by handing her a key and telling her that she may open any door but the one that key fits. Curiosity gets the better of her, and, once she sees the victims of her husband’s rage, she flees, dropping the key in the pool of blood. Just as Bluebeard is about to execute his wife (he sees the telltale blood on the key), the wife’s brothers come to her rescue.
For many years, the husband’s homicidal history in this folktale took a back seat to the wife’s curiosity, which was inflected morally as sexual infidelity. Today, “Bluebeard” has almost fallen into a cultural black hole, but the story still flashes out at us in Charlotte Brontë’s “Jane Eyre,” Richard Wright’s “Black Boy,” and Margaret Atwood’s “The Robber Bride.” The Hollywood Dream Factory, which gave us Bluebeard films like “Rebecca” and “Secret Beyond the Door,” has now recycled the old horror story (with a perverse twist) in “Get Out” and “Ex Machina.” Presto! Bluebeard has become a new kind of monster, a seductive femme fatale as dangerous as her folkloric forebear, one who reveals to us a host of new cultural anxieties about female intelligence and ingenuity.
Dark Harvest
Recommended by Steven Schlozman, Assistant Professor of Psychiatry, Harvard
This book is among the best American horror novels ever written. It mixes Americana with a sense of resigned but terrifying fatalism, adding just a tincture of the occult.
Picture a small New England town, normal in all ways except one day a year when everyone knows that the harvest yields something foul. It combines normalcy and gore, letter jackets and morality, and most importantly, the illusion that you can escape when you never really can.
Dracula
Recommended by David Scadden, Gerald and Darlene Jordan Professor of Medicine
I study the blood, so it has to be Bram Stoker’s “Dracula.”
It may not be high art, but it captures the tensions of science and myth, morality and bestiality, the familiar and the foreign with page-turning suspense. Blood embodying both regenerative life and corrupting disease is not just a literary conceit in either the book or, as far as I can tell, in life; it rings true — chillingly true — and is worth thinking about.
Misery
Recommended by Steven Pinker, Johnstone Family Professor of Psychology
My reactions to horror fiction spring from my world view as a scientific skeptic who is convinced that mental life depends entirely on an intact brain. That means I’m incapable of experiencing frisson at the antics of ghouls, zombies, demons, curses, dybbuks, and other paranormal mischief-makers — they come across as kitschy, not horrific. At the same time my awareness of human depravity is all too acute, and I can be suitably chilled by the prospect of a character’s ingenuity mobilized in the service of malevolent passions like revenge, manipulation, or sexual jealousy. “Cape Fear” and “Fatal Attraction” are deliciously terrifying, but as a writer I’d have to single out “Misery,” which brings to life the mixed blessing of having devoted fans.
Strange Practice
Recommended by Samantha DeWitt, Resource Sharing Specialist, Widener Library
This series follows Greta Helsing, an English doctor who treats supernatural beings — “vocal strain in banshees, arthritis in barrow-wights and entropy in mummies.” What could be better? My favorite was a sweet baby ghoul with a fever, who “wouldn’t even touch her nice rat.” (It was an ear infection, of course, an illness ubiquitous to all children.) Vivian Shaw’s storytelling could be gimmicky, but it isn’t. These well-written books are an absorbing and fun escape into a world where the supernatural is routine.
The Very Secret Society of Irregular Witches
Recommended by Hannah Hack, University Archives Administrative Coordinator
This book is a whimsical, heartfelt, and at times laugh-out-loud tale of a young, lonely witch trying to find her place. It has an assortment of odd yet lovable characters (including a grumpy librarian and possibly murderous children), lots of magic, and a touch of romance.
Wuthering Heights
Recommended by Min Jin Lee, 2018-2019 Catherine A. and Mary C. Gellert Fellow at the Harvard Radcliffe Institute; author of “Pachinko”
I’m a coward and can get spooked by my own shadow, so I avoid gory or frightening stories and visual media. Not a big fan of Halloween. Life and Washington, D.C., are plenty scary enough. That said, I am very interested in any narrative about a haunting love. I can think of few stories with the kind of obsessive romance that rivals “Wuthering Heights” — which has a ghost, forbidden desire, pathological love triangles, class and ethnic prejudices, intrigue, rivalries, and some good old-fashioned anguish. Catherine is kinda bonkers, but Heathcliff has the hots for her, and by gosh, he suffers for it.
Amid Hurricane Milton’s devastation, a sliver of good news
Cellphone data suggest evacuation mandates, warning systems worked
Anna Lamb
Harvard Staff Writer
Earlier this month, Hurricane Milton caused an estimated $50 billion in damage and claimed the lives of at least 14 people, yet didn’t deliver the scale of destruction some had feared.
Preparedness seems to have played a role in Milton’s relatively low death toll, according to a panel recently hosted by Harvard’s Salata Institute for Climate and Sustainability. This is “good news” according to Satchit Balsari, co-director of the research platform CrisisReady, which uses cellphone data to study the travel patterns of people in disaster zones.
“Warning systems and evacuations and people getting used to those risks and hardening their homes actually make a difference,” said Balsari, associate professor in emergency medicine at Harvard Medical School and Beth Israel Deaconess Medical Center. “Evacuation hit 80 percent to 90 percent in some of the key areas.”
Those key areas, concentrated mostly along Florida’s coast, are “mandatory evacuation” zones. Farther inland, he said, evacuation rates dropped to around 45 to 50 percent. “So yes, while we’re celebrating that mandates work, about half the population had not evacuated.”
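As a hypothetical illustration of how such rates can be derived from anonymized mobility data (a generic sketch, not CrisisReady's actual methodology or data schema): take the devices whose usual overnight location falls inside an evacuation zone, then count what fraction appear elsewhere during the storm window.

```python
# Hypothetical sketch of an evacuation-rate estimate from anonymized
# device locations; the schema and values are invented for illustration.
import pandas as pd

pings = pd.DataFrame({
    "device": ["a", "a", "b", "b", "c", "c"],
    "period": ["pre", "storm"] * 3,
    "zone":   ["evac_A", "inland", "evac_A", "evac_A", "evac_A", "inland"],
})

pre = pings[pings.period == "pre"].set_index("device")["zone"]
storm = pings[pings.period == "storm"].set_index("device")["zone"]

residents = pre[pre == "evac_A"].index          # usually in the evac zone
evacuated = storm.loc[residents] != "evac_A"    # elsewhere during the storm
print(f"evacuation rate: {evacuated.mean():.0%}")  # -> 67%
```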
He added that people with pets, elderly residents, or residents living in fortified buildings were likelier to stay put. Another factor in evacuation rate, according to Balsari: whether a population has experienced a disaster in living memory. As recently as 2022, Florida was struck by the deadly Category 4 storm Hurricane Ian.
Balsari also pointed to the Bangladesh cyclones of 1990 and 1994.
“What’s interesting is they were almost the same storm. They had almost the same track, same intensity, made landfall in the same place. In 1990 the cyclone hit in Bangladesh, and 138,000 people died. Three years later, the same storm hits, 350 people died.”
The difference, he said, was that by 1994 the government had invested in an early warning system and constructed concrete bunkers across the country where people could ride out the storm.
“Something like 450,000 people sheltered in those bunkers,” Balsari said. “And 350 people died.”
Hurricane Helene, a Category 4 storm that hit the South just weeks before Milton, is tied to more than 200 deaths, with economic losses estimated at $250 billion.
“The storm dropped only 10 inches of rain in western North Carolina, but there are over 250 people dead, still, I think, about 100 missing, and hundreds and hundreds of homes destroyed,” said Daniel Schrag, the Sturgis Hooper Professor of Geology and professor of environmental science and engineering. “They weren’t expecting flooding there. In North Carolina, they weren’t expecting the kind of impacts from the storm.”
Referring to Milton, Balsari added: “A lot of people who died in this hurricane died not because of the hurricane, but because of tornadoes that were spawned by the hurricane.
“So that’s an interesting phenomenon. And it suggests it’s the surprise element [that’s deadly].”
However, no matter how prepared a locale is, there are some things you cannot anticipate. Balsari pointed to an epinephrine shortage in the wake of Hurricane Helene.
“The storm just unleashed tons of bees, and they have such a huge spike in bee stings that a lot of the relief organizations are actually trying to get more epinephrine to North Carolina as soon as possible.”
Balsari cautioned that we’re only starting to understand the full toll of Hurricanes Milton and Helene.
“You lose cellphone coverage, power is lost at home, your nebulizer is not going to work, you cannot refrigerate your insulin, and access to dialysis centers is sometimes interrupted. In the couple of months after a hurricane, people continue to die at a higher rate than expected.”
New Herbaria director studies plants via satellite and microscope for insights on changing planet
Anne J. Manning
Harvard Staff Writer
The 5 million specimens of pressed and dried plants, algae, and fungi in the Harvard University Herbaria’s collection — among the world’s largest — must be celebrated and protected for their own sake, and for their role in deepening our understanding of a changing planet, says newly appointed Director Jeannine Cavender-Bares.
“Biodiversity collections house the knowledge we have about organisms, their taxonomies, their names, and the whole histories of their discovery,” said Cavender-Bares, a plant ecologist and professor in the Department of Organismic and Evolutionary Biology.
The new director has spent her career studying plant biology from the smallest physiological features to sweeping habitats made visible by advances in satellite-based remote sensing. Solving the related crises of climate change and biodiversity loss requires discovery at many different scales, she said.
Cavender-Bares points to a research project she led that mapped swaths of diseased oak trees so they could be culled to prevent spread. On the cellular level, her teams examined tissue of trees stricken with oak wilt disease to understand immune response. Using aircraft, they measured electromagnetic information from affected forests, detecting sick and healthy canopies.
“This is an example of how we’re moving from cells and details of the anatomy of a plant to large, regional remote sensing for management,” she explained.
This and other projects fall under ASCEND, a National Science Foundation-funded Biology Integration Institute based at the University of Minnesota and led by Cavender-Bares, which aims, in part, to train the next generation of leaders in spectral biology and its many applications.
As a leading expert in spectral biology, Cavender-Bares uses the interaction between light and plant matter to reveal unique, fingerprint-like information about plants’ chemistry, structure, and cellular function. Such information can be gathered from handheld spectroradiometers measuring how leaves reflect light, and from aircraft and satellite sensors, like those planned for NASA’s Surface Biology and Geology mission and the European Space Agency’s CHIME mission, which will capture sweeping hyperspectral information over landscapes for monitoring agricultural practices and soil health.
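To give a flavor of how reflectance becomes plant information, here is one classic spectral index, a generic textbook example rather than Cavender-Bares' own methods. Healthy leaves absorb red light for photosynthesis and strongly reflect near-infrared light, so the normalized difference between the two bands separates vigorous vegetation from stressed or sparse cover.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index from reflectance in [0, 1]."""
    red, nir = np.asarray(red, dtype=float), np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red)

# Reflectance values below are assumed for illustration only.
print(round(float(ndvi(0.05, 0.60)), 2))  # healthy leaf    -> 0.85
print(round(float(ndvi(0.30, 0.35)), 2))  # sparse/stressed -> 0.08
```

Hyperspectral sensors extend the same idea from two bands to hundreds, which is what makes finer, fingerprint-like discrimination possible.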
Cavender-Bares is initiating an effort toward spectral digitization of herbarium specimens, not only at Harvard but among a network of academic and institutional herbaria worldwide.
“We’re working with other herbaria to get all the protocols worked out and figure out the right way to do this, so that we don’t have 10 different herbaria doing this 10 different ways,” she said.
The Herbaria will soon mark the 100-year anniversary of one of its largest and most important collections, the Farlow Herbarium of Cryptogamic Botany, which contains 1.4 million specimens of lichenized and non-lichenized fungi, bryophytes, and algae. Luminaries in the mycological sciences and botany will gather at Harvard to reflect on the collection’s history and share knowledge about everything from the foundational importance of fungi in ecosystems to the latest science of rusts, fungal pathogens that cause plant diseases.
Cavender-Bares hopes events like the Farlow celebration will give the Herbaria a forward-facing outlook and underscore for the public its significance as a hub of scientific knowledge. Biodiversity collections show us where we’ve been and what we could lose, she noted.
“We’re facing choices about converting our forests into solar panels,” she said. “But if we’re not simultaneously thinking about all these other organisms for the functions they provide, for the water they clean, for erosion control, for carbon sequestration — we’re going to lose them, and the potential they harbor for regenerating healthy ecosystems in the face of global change.”
Robert and Ardis James created a culture of philanthropy within the James family that has spanned generations
The late Ardis Butler James — an avid quilter and collector of quilts since childhood — and her husband, the late Robert “Bob” James, M.B.A. ’48, Ph.D. ’53, amassed an assortment of quilts that eventually became the largest public collection in the world. In an early example of generosity that would characterize their philanthropy and inspire so many others, Ardis and Bob, both native Nebraskans, donated nearly 1,000 quilts to establish the International Quilt Study Center (now known as the International Quilt Museum) at the University of Nebraska-Lincoln, in 1997. The couple was inducted into the Quilters Hall of Fame in 2011.
The Jameses’ love of quilts — made by piecing together different fabrics to create something beautiful and unified — is a fitting metaphor for their approach to philanthropy. Through their generosity and service to Harvard, the James family has had a profound impact on the entire University community, sewing together the rich variety of Harvard’s constituent Schools and programs to create a unified whole that is greater than the sum of its parts.
Inspired by the quality of his Harvard education, the couple wanted to give back to the institution that had given Bob so much, making their first donation to the Harvard Business School fund. Shortly after, they made a significant contribution to the School to establish the Robert and Ardis James Fellowship, which offered scholarships for Eastern European students in need. These gifts were driven by their unwavering belief in supporting future generations and initiated a lifetime of transformative support.
Recognizing that leaders and change-makers emerge from every corner of the University, Bob and Ardis expanded their philanthropy beyond HBS and the Harvard Kenneth C. Griffin Graduate School of Arts and Sciences, the two Schools that Bob had attended. The James family’s generosity began creating a quilt of a different sort, intentionally integrating various aspects of the University and their own passions to strengthen the institution as a whole, embodying the spirit of “One Harvard” long before President Emerita Drew Gilpin Faust championed this idea during her presidency.
A family tradition
Bob and Ardis instilled their values into their children, Ralph James, M.B.A. ’82, and Cathy James Paglia, M.B.A. ’76, from an early age. Together, two generations of the James family have spread their generosity across the University, including gifts to HBS, Harvard Griffin GSAS, Harvard Kennedy School, Harvard Divinity School, Harvard Radcliffe Institute, the Harvard Art Museums, Harvard Medical School, the Harvard Graduate School of Education, and Harvard T.H. Chan School of Public Health.
Their contributions were pivotal in the recent restoration of HDS’s Swartz Hall, reflecting their belief in the importance of understanding religion to comprehend human motivations and foster a peaceful, just world. Named in the family’s honor, the James Room at Swartz Hall is a new, state-of-the-art space for teaching, learning, and gathering that expands the School’s convening power.
“Whether or not my father knew it or articulated it in this way, he was one of the earliest proponents of One Harvard. My sister and I have embraced it ever since.”
Ralph James, M.B.A. ’82
Having grown up when opportunities for women were scarcer, Ardis was deeply inspired by HRI’s mission, particularly the Schlesinger Library, which illuminates the lives of American women past and present. She believed that to impact the future, one must first understand the past.
Outside of their own philanthropic interests, the James family has enduring faith in Harvard’s leadership to address the biggest challenges in the world today.
“My family’s commitment to philanthropy reflects many things,” Ralph says. “The quality and values of leadership really do matter. Execution matters. You can have a fabulous strategy, but it isn’t going to have the impact you want unless it’s executed properly. Across the board, Harvard exhibits these qualities. They check all the boxes.”
Ralph and Cathy have lent their time and talents in support of the University in numerous other ways. Ralph previously served as HBS executive director for external relations, as HBS executive director of executive education, as co-chair of HGSE’s most recent fundraising campaign, and as a member of the HDS Dean’s Council and the HRI Dean’s Advisory Council. Cathy has co-chaired her HBS class reunion efforts every year since her graduation.
Today, Ralph and Cathy honor their late parents by carrying on their legacy, seeking new ways to support Harvard and adding new patches to the family’s meticulously crafted quilt of philanthropy.
“Everything always evolves. Philanthropy continues to change. My parents had a particular way of viewing their philanthropy, and Cathy and I are continuing that tradition,” Ralph said. “They supported what was important to them, and as we grow older, we bring our own perspectives into family philanthropy with the hope that the next generation will do the same. The beauty of Harvard is that whatever you may be interested in or want to accomplish, Harvard is doing it somewhere on campus.”
Comedy writer Simon Rich talks about turning life into funny fiction, offers tips for young writers
Samantha Laine Perfas
Harvard Staff Writer
Simon Rich had a two-book contract from Random House by the time he graduated from Harvard in 2007. Since then, he has written for “Saturday Night Live,” served as showrunner for the cable TV comedy “Man Seeking Woman” (based on his collection “The Last Girlfriend on Earth”), written the screenplay for the Seth Rogen film “An American Pickle,” and published numerous pieces in publications such as The New Yorker, Vanity Fair, and McSweeney’s.
His latest collection of short stories (his seventh), “Glory Days,” follows a hilarious cast of characters, including an old man who tells his great-grandson about romance in the age before post-climate change dystopia, a nostalgic participation trophy buried in a landfill under “four hundred tons of Wow potato chips,” and David and Goliath (who, as it turns out, threw the fight).
Rich spoke to the Gazette about his latest work and offered some insight into his creative process — with some tips for young writers. This interview has been edited for length and clarity.
What approach did you take to “Glory Days,” and did you feel it was similar to or different from some of your past work?
I would say it’s embarrassingly identical to my last four books, which were all collections of short stories. But thematically, it’s a little bit new for me. The characters are a little older and grappling with higher-stakes dilemmas than some of the protagonists of my previous collections.
When you say, “embarrassingly identical,” what do you mean?
When I was starting out as a writer, I was a lot more experimental in my approach to books. I was still trying to figure out my style and my own sensibility and taste. Around my late 20s, I hit upon a style that I liked, and I think I haven’t creatively grown or changed. It’s safe to say, whatever people thought of the last four books, I imagine they’ll feel very similarly about this one.
You mentioned your protagonists face higher stakes in this collection. A lot of the themes touch on family, parenting, or the pains of getting older. Did you lean into your own experiences to reflect the type of angst your characters encounter?
I think all of my books are really autobiographical, which you wouldn’t necessarily know by reading them. The premises are so surreal; the characters are ridiculous; and their life experiences don’t actually match my own. But on an emotional level, I’m always trying to write stories that are authentic to what I’m actually experiencing on earth.
My last collection, “New Teeth,” was very much stories about becoming a parent and having children. And this collection is about turning 40 and entering midlife. There are a lot of characters in the book who are grappling with a sense of obsolescence and trying to adapt to a world in which they’re no longer the youngest generation. There are stories about characters who are used to winning but have to come to grips with their own frailty and, hopefully, gain a bit of humility.
Let’s talk a little about process. You approach your stories from surprising angles. For example, you’ll take a well-known plot or character and add an unexpected twist, like Mario (from Super Mario Brothers) going through a midlife crisis or the Tooth Fairy being hounded by what sounds like an illegal tooth smuggling ring. When or how do these ideas occur to you?
Since college, I’ve had a family encyclopedia that I’ll flip through for ideas. The one I use now, the Oxford Family Encyclopedia, I’ve owned literally since school. I would go to Lamont Library and flip through their encyclopedias and magazine collections, searching for evergreen topics that might yield comedic premises.
I avoided newspapers because I didn’t want to do anything too topical; topics were often fraught and had too many satirical associations. I looked at magazines, but eventually landed on children’s encyclopedias because every page is filled with common reference points. So many of my stories have come from just flipping through the pages and being struck by a topic or reference that could be humorously inverted.
Where did you come across this strategy?
It was early freshman year when I was trying out for the Lampoon. I had read an article about The Onion and knew that the way they generated material was by reading the news and taking that day’s headlines and attempting to subvert them. But I knew I didn’t want to do satire. I wanted to pursue more absurdist humor, but at the same time, I didn’t want to be too untethered or unhinged. I was looking for that sweet spot where I could meet the reader in an accessible place but then take them on an interesting journey.
When do you know you’ve landed on something?
The first step is me thinking the concept is funny, but then the second bar it has to clear is: Do I have a funny idea for how to execute it? There are hundreds of premises on my computer that I think are intriguing, funny, or interesting, but I don’t actually know how to write them. I don’t know what perspective they should be written from. I don’t have a story. I don’t know who the protagonist ought to be. A lot of times an idea will sit on my computer for many years before I come up with the right narrator, the right point of view, or the right plot.
Was your time at Harvard and writing for the Lampoon a formative time for your development as a writer?
Totally. I wrote my first book in college, and my process was exactly what I just described. I would take my encyclopedia and go to Peet’s Coffee every night and sit at the glass-walled ledge. In longhand, I would write down any premises that occurred to me, and then in the mornings I’d go to the Lampoon and write up the ones that felt the most promising.
I basically wrote that first book at the Lampoon and in the computer room in Adams House. I never took any creative writing classes, but there were writers at the Lampoon that I really looked up to.
My favorite was the recently graduated Danny Chun, who would come back and visit. I read everything he wrote for the Lampoon, including stuff that didn’t get into the magazine. He was probably my biggest comedic influence at the time.
Colin Jost was also really encouraging and supportive of younger writers and would always take the time to read what I was working on and give me feedback.
And to be totally honest, I took a lot of classes for material, specifically subjects that I thought might yield premises. I was so single-minded in my pursuit of original comedic premises that I would take courses purely for that reason. I remember thinking, “If I get a piece out of this, it’s worth it.”
Do you have any advice for other young writers?
I remember when I was at Harvard, I fell in love with the writer William Somerset Maugham. I just absolutely flipped for “Of Human Bondage” and “The Razor’s Edge,” which I found at The Coop.
So I went to Lamont and realized he’d written about 100 books. I picked a couple off the shelf at random and found they weren’t as strong as the ones that had been consistently in print for 100 years or so.
I had this epiphany: Nobody remembers your bad stuff. Even if it’s egregiously bad, it just kind of evaporates, like it never existed. People only remember your absolute best stuff.
Realizing that set me free to take a lot of risks and try a lot of different genres and mediums because I knew that nobody was really watching. Even if I put terrible stuff into the world, it would be forgotten. That was a real comfort to me as an undergraduate. I took a lot of risks with my writing. In those early days, I wrote in a lot of genres that I had no business writing in and learned a lot from doing it.
What’s next for you?
I’m doing a show on Broadway in December called “All In: Comedy About Love” directed by Alex Timbers and featuring a rotating cast of actors and comedians — like John Mulaney, Andrew Rannells, Richard Kind, and Chloe Fineman — reading my work. It’s a lot of really funny, talented people and the score is by the Magnetic Fields, who were a big influence on my writing. I’m really excited about it.
Key to negotiated peace in Ukraine? Having the West keep Russia honest.
Former defense minister says U.S., allies need to continue financial, arms aid, remove curbs on missiles to bring Putin to the table
Anna Lamb
Harvard Staff Writer
There is a path forward to peace in Ukraine, according to the country’s former minister of defense. But it will require concrete, mutually agreed-upon terms with Russia, backed up by support from the U.S. and allies to ensure Moscow doesn’t renege.
Oleksii Reznikov, who served as Ukraine’s minister of defense from 2021 to 2023, paid a visit to the Ukrainian Research Institute on Oct. 16 to discuss the possibility of negotiated peace in the ongoing Russo-Ukrainian war.
The discussion was moderated by Mariana Budjeryn, a senior research associate with the Project on Managing the Atom (MTA) at the Harvard Kennedy School’s Belfer Center. It touched on Ukraine’s requests for more weapons from the U.S. and its allies, the possibility of full NATO membership, and the need for shows of iron-clad Western support as a prod to bring Russia to the table for talks to end the almost three-year war.
“Crushing Putin’s regime on the battlefield is the best way to launch the transformation of Russia, and that is possible. Ukraine has proven as much,” Reznikov said. “All we need is the sufficient and timely support without six-month pauses — I mean decision in Congress — and without limitation on how weapons are used.”
The U.S. has thus far restricted the use of long-range missiles it has supplied to Ukraine, which wants to strike targets deep in Russian territory. Russian President Vladimir Putin has warned that he would consider any such attack to be an act of war by the allies.
The U.S. has continued to provide financial and weapons aid to Ukraine throughout the conflict, most recently in the form of a $425 million security package. However, Reznikov said, Russia continues to dominate Ukraine in the number and sophistication of its weapons, along with disinformation campaigns.
“We need a lot of weapons and trained personnel if you don’t want your people to lose lives,” he said. “This is one of the last, if not the last, wars where you will see large manpower in direct combat. War is increasingly turning into a competition between robotic systems as well as automated control systems.”
Reznikov also mentioned Russia’s use of deepfakes, AI, and cyberattacks to spread propaganda and disinformation.
“We have the experience of defending a democratic country against a strong and highly technological army of an autocratic regime,” he said. “The main challenge is the timely recognition of threats when the enemy is actively employing hybrid sub-threshold acts of aggression and is weaponizing everything from freedom of speech to migration streams, food logistic chains, freedom of navigation in a Black Sea, Caspian Sea, etc.”
And if the two countries do get to the negotiating table, Reznikov said, two basic components are needed for the talks to succeed: trust and guarantees, backed by the threat of enforcement.
Ukraine has had bad experiences when it comes to trust, he said. For example, the 1994 Budapest memorandum with Russia, the U.S., and the U.K. required Ukraine to give up its then-sizable nuclear arsenal (left behind during the breakup of the Soviet Union) in exchange for security guarantees from the three nations.
“We have given up the third-largest nuclear arsenal in the world, our strategic aviation, and the missiles [once part of Ukraine’s nuclear arsenal, now converted to conventional arms] that Russia is now using to kill our children and women in the cities, far from the front,” Reznikov said. “One of the guarantors of our security, Russia, openly attacked us in 2014 and others stood by and did nothing.”
But “there are also success stories,” he said, of long-running hostilities ended through negotiation.
Reznikov pointed to the Good Friday Agreement between the U.K. and Ireland as an example. The accord ended more than 30 years of conflict and helped develop the system of government operating in Northern Ireland today.
“It was not easy, and it didn’t solve every problem, but it allowed the parties to proceed past a stalemate and to overcome some obstacles,” Reznikov said. “There was a trust. There were guarantees, and there was a model of coexistence.”
In the case of Russia and Ukraine, Reznikov believes the two countries will need allies on either side of the table to ensure guarantees. One option would be membership in NATO for Ukraine, assuring protection by Western allies against future aggression.
“Another option is a bi- or multilateral deal, detailing direct obligations to support Ukraine, should Russia violate its side of the deal,” he said. “Not promises to hold consultations, but concrete steps, weapons, financial support, closing the sky with the air defenses, crushing sanctions against Russia, etc.”
Without the backing of other countries, Russia will continue to be a source of instability within the region, Reznikov said.
He also noted that concerns by the West over Putin’s thinly veiled threats of nuclear attacks over Ukraine are overblown.
“Russia is developing policies to increase its workforce. It is not planning to die in nuclear ashes,” he said. “The moment they encounter resistance, they stop and back down. Therefore, the free world needs to set its anxieties aside.”
Mars may have been habitable much more recently than thought
Anne J. Manning
Harvard Staff Writer
Study bolsters theory that protective magnetic field supporting life-enabling atmosphere remained in place longer than previously estimated
Evidence suggests Mars could very well have been teeming with life billions of years ago. Now cold, dry, and stripped of what was once a potentially protective magnetic field, the Red Planet is a kind of forensic scene for scientists investigating whether Mars was indeed once habitable and, if so, when.
The “when” question in particular has driven researchers in Harvard’s Paleomagnetics Lab in the Department of Earth and Planetary Sciences. A new paper in Nature Communications makes their most compelling case to date that Mars’ life-enabling magnetic field could have survived until about 3.9 billion years ago — roughly 200 million years longer than the previous estimate of 4.1 billion years.
The study was led by Griffin Graduate School of Arts and Sciences student Sarah Steele, who has used simulation and computer modeling to estimate the age of the Martian “dynamo,” the global magnetic field produced by convection in the planet’s iron core, as on Earth. Together with senior author Roger Fu, the John L. Loeb Associate Professor of the Natural Sciences, the team has doubled down on a theory it first argued last year: that the Martian dynamo, capable of deflecting harmful cosmic rays, was around longer than prevailing estimates claim.
Their thinking evolved from experiments simulating cooling and magnetization cycles of huge craters on the Red Planet’s surface. Known to be only weakly magnetic, these well-studied impact basins have led researchers to assume they formed after the dynamo shut down.
This timeline was hypothesized using basic principles of paleomagnetics, or the study of a planet’s prehistoric magnetic field. Scientists know ferromagnetic minerals in rock align themselves with surrounding magnetic fields when the rock is hot, but these small fields become “locked in” once the rock has cooled. This effectively turns the minerals into fossilized magnetic fields, which can be studied billions of years later.
Looking at basins on Mars with weak magnetic fields, scientists surmised they initially formed amid hot rock during a period in which there were no other strong magnetic fields present — in other words, after the planet’s dynamo had gone away.
But the Harvard team says this early shutdown isn’t needed to explain those largely demagnetized craters, according to Steele. Rather, they argue that the craters formed while the Martian dynamo was undergoing a polarity reversal — north and south poles switching places — which, their computer simulations show, can explain why these large impact basins have only weak magnetic signals today. Similar pole flips happen on Earth every few hundred thousand years.
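A toy numerical version of that argument, sketched here under simple assumptions rather than drawn from the team's actual model: as a basin cools, each layer locks in the field polarity present at its own cooling moment, so a dynamo that reverses during cooling leaves oppositely magnetized layers whose signals largely cancel.

```python
import numpy as np

rng = np.random.default_rng(0)

def net_magnetization(n_layers=10_000, n_reversals=0):
    """Toy model: each layer locks in +/-1 polarity as the basin cools.

    Layers cool at random times in [0, 1]; the dynamo flips sign at
    n_reversals random epochs. Returns the mean locked-in polarity.
    """
    lock_times = rng.uniform(0, 1, n_layers)
    flips = np.sort(rng.uniform(0, 1, n_reversals))
    polarity = (-1.0) ** np.searchsorted(flips, lock_times)
    return polarity.mean()

print(abs(net_magnetization(n_reversals=0)))  # -> 1.0: strongly magnetized
print(abs(net_magnetization(n_reversals=9)))  # -> well below 1: weak signal
```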
“We are basically showing that there may not have ever been a good reason to assume Mars’ dynamo shut down early,” Steele said.
Their results build on previous work that first upended the existing Martian habitability timeline. In that study, the team used a famed Martian meteorite, Allan Hills 84001, and a powerful quantum diamond microscope in Fu’s lab to infer a magnetic field persisting until 3.9 billion years ago by studying different magnetic populations in thin slices of the rock.
Steele says poking holes in a long-held theory is a little nerve-wracking, but that they’ve been “spoiled rotten” by a community of planetary researchers who are open to new interpretations and possibilities.
“We are trying to answer primary, important questions about how everything got to be like it is, even why the entire solar system is the way that it is,” Steele said. “Planetary magnetic fields are our best probe to answer a lot of those questions, and one of the only ways we have to learn about the deep interiors and early histories of planets.”
Outdoor physical activity may be a better target for preventive intervention, says researcher
Jacqueline Mitchell
BIDMC Communications
A growing body of evidence shows that taking vitamin D supplements does not reduce cardiovascular risk in older adults, according to a new study out of Beth Israel Deaconess Medical Center, a Harvard affiliate.
Cardiovascular disease is the primary cause of death among adults over age 65.
“While multiple observational studies have demonstrated a relationship between low vitamin D and high risk for cardiovascular disease, few randomized controlled trials to date have evaluated the role of vitamin D supplementation on cardiovascular disease,” said lead author Katharine W. Rainer, a resident physician at BIDMC. “Our study decisively showed that vitamin D had no effect on the markers of cardiovascular disease over the 2-year follow-up period, regardless of dose. These results reinforce evidence that vitamin D supplementation is not an effective intervention for cardiovascular disease prevention.”
To evaluate the effect of vitamin D supplementation on the heart, researchers at BIDMC assessed whether higher doses of the vitamin reduced the presence of two specific proteins in the blood known to indicate cardiac injury and strain. The team’s analysis of data from a double-blind, randomized trial — the gold standard of scientific testing — does not support the use of higher-dose vitamin D supplementation to reduce cardiovascular risk in adults with low blood levels of vitamin D. The study is published in the American Journal of Preventive Cardiology.
Rainer and colleagues analyzed data from a National Institute on Aging-sponsored trial conducted between July 2015 and March 2019. Participants were randomized into one of four groups, receiving 200, 1,000, 2,000, or 4,000 international units (IU) per day of vitamin D3 supplementation. Blood levels of the markers of cardiovascular disease were measured at baseline and at three-, 12-, and 24-month follow-up visits.
The investigators found that lower vitamin D levels were associated with a baseline elevation in one marker of cardiovascular disease, but that supplementation failed to reduce either marker over the two-year study period, regardless of dose. The findings were largely consistent across participants’ age, sex, race, and history of cardiovascular disease, including high blood pressure and/or diabetes.
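As a rough illustration of what such a null result looks like in practice, here is a minimal sketch of the comparison, assuming a tidy table of per-participant measurements; the column names and values below are hypothetical, not the study’s data.

```python
# Hypothetical sketch: mean change in a blood marker of cardiac injury,
# by vitamin D3 dose arm. A null result shows near-zero change in every arm.
import pandas as pd

df = pd.DataFrame({
    "dose_iu":  [200, 200, 1000, 1000, 2000, 2000, 4000, 4000],
    "baseline": [12.0, 15.5, 11.2, 14.8, 13.1, 12.7, 14.0, 12.2],
    "month_24": [12.4, 15.1, 11.5, 14.6, 13.3, 12.9, 14.2, 12.1],
})
df["change"] = df["month_24"] - df["baseline"]
print(df.groupby("dose_iu")["change"].mean())
```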
“While much work is needed to understand why vitamin D deficiency is associated with CVD, our study adds to the growing body of evidence that daily or monthly supplementation with vitamin D does not prevent CVD events or reduce markers of subclinical cardiac injury or strain,” said corresponding and senior author Stephen P. Juraschek, research director of the Hypertension Center at BIDMC. “Instead, there may be other factors upstream to vitamin D and CVD (such as outdoor physical activity, for example) that may be a better target for preventive interventions.”
Co-authors included William Earle of BIDMC.
This work was supported by the National Institutes of Health/National Heart, Lung, and Blood Institute (grants NIH/NHLBI 7K23HL135273 and 3K23HL135273S1); the National Institute on Aging (grants U01AG047837 and K01AG076967); the Office of Dietary Supplements; the Mid-Atlantic Nutrition Obesity Research Center (grant P30DK072488); and the Johns Hopkins Institute for Clinical and Translational Research (grant UL1TR003098). STURDY is registered on clinicaltrials.gov under identifier NCT02166333.
The award supports independent research in the arts and humanities at the American Academy in Rome. Both Princeton recipients are undergraduate alumni.
Singapore is prepared to facilitate Asia’s transition to a low-carbon future. But Singapore cannot do this alone; it will work with like-minded partners. This was a point underscored by Mr Ravi Menon (Arts and Social Sciences ‘87), Singapore’s first Ambassador for Climate Action and member of the NUS Board of Trustees, at the inaugural NUS Environmental Management Leadership Lecture (EMLL) on 14 September 2024.
Highlighting the urgency of preparing for a climate-impaired world, Mr Menon noted that climate-related disasters in 2022 affected over 52 million people and led to US$36 billion worth of damages.
At the same time, decarbonisation efforts are also on the rise in many countries.
“Global momentum on climate action is picking up because people are beginning to see what is happening,” he added, emphasising that the world will continue to face the dual realities of being climate-impaired while striving for low carbon emissions.
Mr Menon addressed over 150 NUS students, alumni, staff and industry partners at the lecture titled “Preparing for a Low-Carbon and Climate-Impaired World” which kickstarted the Highlight edition of this year’s NUS Sustainability CONNECT. He is also Chairman of the Glasgow Financial Alliance for Net Zero (GFANZ) Asia-Pacific Advisory Board and a member of the GFANZ Principals Group.
Organised by the NUS School of Continuing and Lifelong Education under the Master of Science (Environmental Management) programme, the annual EMLL series aims to facilitate interdisciplinary discussion about key issues and new developments in environmental management. Themed “Transformative Leadership in Climate Action: Navigating Challenges, Harnessing Innovation”, the series explores the role of transformative leadership as the driving force behind successful environmental management efforts, providing insights on how visionary and innovative leaders overcome obstacles and leverage new technologies to create a sustainable future.
Singapore’s triple transition to thrive in a low-carbon world
Singapore aims to achieve its long-term net-zero emissions aspiration by 2050, with intermediate targets across sectors as outlined in the Singapore Green Plan 2030. Referring to the changes as “complex and not easy, but necessary”, Mr Menon highlighted that the first change is a carbon transition which involves reducing our primary emissions to net-zero across sectors including industry, transport, and households.
Next is the energy transition which entails progressively decarbonising Singapore’s electricity grid while ensuring that it remains resilient. This requires striking a balance across energy security, affordability and sustainability. While Singapore’s unique geographical constraints make it difficult to harness certain forms of renewable energy, the nation can tap on the “four switches” to progressively decarbonise the grid – namely natural gas, solar, electricity imports from regional power grids, and low-carbon alternatives.
Mr Menon also noted that nuclear energy is another energy source the government has been looking at. However, he cautioned that there must first be public acceptance and adequate safety measures.
Lastly, an economic transition will also be necessary. He urged participants to “grow the green and green the brown”, referring to how Singapore must grasp green growth opportunities, transform carbon-intensive sectors, and turn being low-carbon into a competitive advantage.
To prepare for a climate-impaired world, Singapore must not only strive to reduce its emissions but also adapt to the direct impacts of climate change and its potential knock-on effects. Singapore has started implementing various measures to strengthen coastal protection, flood resilience, heat resilience, and food security. The government is also conducting in-depth studies to plan ahead and develop solutions in these areas.
“It will take a whole-of-nation effort,” he said. Significant scientific and technological advances will be necessary – which is where universities and academics can contribute – but individuals and communities must do their part too, he added.
Leading the way in transitioning to a low-carbon future
Climate action must go beyond our borders, noted Mr Menon, with a nod to the ASEAN Power Grid initiative that is currently underway.
Having a system of cross-border power connections will allow renewable energy from typically remote generation sites to reach population centres across the region. This will spur decarbonisation in the region and enable the free trade of clean energy.
The city state can also drive green efforts in the region through blended finance, where governments invest in climate projects to reduce risk and make it easier for private entities to invest. Under the Financing Asia’s Transition Partnerships initiative, Singapore aims to raise US$5 billion with international partners to finance the effort to decarbonise the region.
The lecture was capped by an engaging Question-and-Answer segment that was facilitated by Professor Benjamin Cashore, Li Ka Shing Professor in Public Management and Director at the Institute for Environment and Sustainability.
During the segment, Mr Menon fielded questions about green finance and the need for a just transition in Asia, where countries should not have to forgo development for the sake of decarbonisation.
Noting that climate change is a “classic problem of collective failure”, he emphasised the need for collaboration to resolve the issue.
“There isn’t going to be one global leader that will solve the climate problem. We should look at where we can seize leadership at various levels,” he said. One way to exercise leadership in science and technology is by pivoting resources and capabilities towards developing climate solutions. In finance, financial institutions can exercise leadership through working with clients and customers to channel finances to activities that reduce emissions.
While strong government action is needed, it can only succeed with the backing of the rest of society. Community efforts and collective individual actions can signal demand for greener products and support for decarbonisation. “No one acting alone can solve the problem…we need collective action,” he concluded.
As the metal artist in residence and technical instructor in MIT’s Department of Materials Science and Engineering (DMSE), Rhea Vedro operates in a synthesis of realms that broadens and enriches the student experience at MIT.
“Across MIT,” she says, “people in the arts, humanities, and sciences come together, and as soon as there’s opportunity to talk, sparks fly with all of the cross-pollination that is possible. It’s a rich place to be, and an exciting opportunity to work with our students in that way.”
In 2022, when Vedro read the job description for her current position at MIT, she says it resonated deeply with her interests and experiences. An outgrowth of MIT’s strong tradition of “mens et manus” (“mind and hand”), the position fused seamlessly with her own background.
“It was like I had written it myself. I couldn’t believe the position existed,” Vedro says.
Vedro’s relationship with metals had begun early. Even as a child growing up in Madison, Wisconsin, she collected minerals and bits of metal — and was in heaven when her godmother in New York City would take her to the Garment District, where she delightedly dug through wholesale bins of jewelry elements.
“I believe that people are called to different mediums,” she says. “Artists are often called to work with wood or clay or paper. And while I love all of those, metal has always been my home.”
After earning a master of fine arts in metals at the State University of New York at New Paltz, Vedro combined her art practice over the years with community work, as well as an academic pursuit of metalsmithing history. “Through material culture, anthropology, and archeology, you can trace civilizations by how they related to this material,” she says.
Vedro teaches classes 3.093 (Metalsmithing: Objects and Power), 3.095 (Introduction to Metalsmithing), and 4.A02 (DesignPlus: Exploring Design), where students learn techniques like soldering, casting, and etching, and explore metalsmithing through a cultural lens.
“In my class, we look at objects like the tool, the badge, the ring, the crown, the amulet, armor in relationship to the body and power,” Vedro says.
Vedro also supports the lab sections of class 3.094 (Materials in Human Experience), an experiential investigation into early techniques for developing cementitious materials and smelting iron, with an eye toward the future of these technologies.
Explaining her own artistic journey, which has taken her all over the world, Vedro says the “through-line” of her practice involves the idea of transformation, via the physical process of her hands-on work as a metalsmith, a fascination with materiality, and her community work to “transform lives through the art of making something.”
Such transformation is demonstrated in her ongoing commission from the City of Boston Mayor’s Office of Arts and Culture, entitled Amulet, which invited the public to community workshops, and to Vedro’s “Workbench” positioned by the waterfront in East Boston, to use metal tools of the trade. Each participant made their own mark on sheets of metal, acting with an intention or wish for the safe passage of a loved one or for their own journey. Vedro will fashion the sheets, bearing the “wishmarks” of so many community members, into several 16-to-17-foot birds, positioning them to stand guard at Boston City Hall Plaza.
At MIT, students come to the DMSE’s Merton C. Flemings Materials Processing Laboratory to work on creative projects in fine metals and steel, and also to craft parts for highly technical research in a wide range of fields, from mechanical engineering to aeronautics and astronautics.
“Students will come proposing to make a custom battery housing, a coil for a project going into outer space, a foundry experiment, or to etch and polish one crystal of aluminum,” Vedro says. “These are very specific requests that are not artistic in their origin and rely upon the hands-on metalsmithing of my team, including Mike Tarkanian [DMSE senior lecturer], James Hunter [DMSE lecturer], Shaymus Hudson [DMSE technical instructor], and Christopher Di Perna [DMSE technical instructor].”
Whatever the students’ inspiration, Vedro says she is struck by how motivated they are to do their best work — despite the setbacks and the time investment that come with developing a new skill.
“Everyone here is intensely driven,” she says, adding that many students, perhaps because of their familiarity with the scientific process, “are really good at taking quote-unquote failures as part of their learning process.”
Throughout their exploration in the lab, otherwise known as the Forge/Foundry, many students discover the power of working with their hands.
“There is a zone you get into, where you are becoming one with what you’re doing and lose track of time, and you are only paying attention to how material is behaving under your hand,” Vedro says.
Sometimes the zone produces not only a fine piece of metalwork, but an inspiration about something unrelated, such as a new approach to a research project.
“It frees up the mind, just like when you’re sleeping and you process things you studied the night before,” Vedro says. “You can be working with your hands on something, and many other ideas come together.”
Asked whether 15 years ago she would have thought she’d be working at MIT, Vedro says, “Oh, no. My professional life has been such an incredible braid of different experiences. It’s a reminder to stay true to your unique journey, because you can be like me — in a place I would never have anticipated, where I feel energized every day to come in and see what will cross my path.”
Researchers have developed a machine learning algorithm to accurately detect heart murmurs in dogs, one of the main indicators of cardiac disease, which affects a large proportion of some smaller breeds such as King Charles Spaniels.
The research team, led by the University of Cambridge, adapted an algorithm originally designed for humans and found it could automatically detect and grade heart murmurs in dogs, based on audio recordings from digital stethoscopes. In tests, the algorithm detected heart murmurs with a sensitivity of 90%, a similar accuracy to expert cardiologists.
Heart murmurs are a key indicator of mitral valve disease, the most common heart condition in adult dogs. Roughly one in 30 dogs seen by a veterinarian has a heart murmur, although the prevalence is higher in small breed dogs and older dogs.
Since mitral valve disease and other heart conditions are so common in dogs, early detection is crucial as timely medication can extend their lives. The technology developed by the Cambridge team could offer an affordable and effective screening tool for primary care veterinarians, and improve quality of life for dogs. The results are reported in the Journal of Veterinary Internal Medicine.
“Heart disease in humans is a huge health issue, but in dogs it’s an even bigger problem,” said first author Dr Andrew McDonald from Cambridge’s Department of Engineering. “Most smaller dog breeds will have heart disease when they get older, but obviously dogs can’t communicate in the same way that humans can, so it’s up to primary care vets to detect heart disease early enough so it can be treated.”
Professor Anurag Agarwal, who led the research, is a specialist in acoustics and bioengineering. “As far as we’re aware, there are no existing databases of heart sounds in dogs, which is why we started out with a database of heart sounds in humans,” he said. “Mammalian hearts are fairly similar, and when things go wrong, they tend to go wrong in similar ways.”
The researchers started with a database of heart sounds from about 1000 human patients and developed a machine learning algorithm to replicate whether a heart murmur had been detected by a cardiologist. They then adapted the algorithm so it could be used with heart sounds from dogs.
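The article doesn’t describe the team’s code, but the human-to-dog transfer step it outlines can be sketched in a few lines of PyTorch. Everything below — the architecture, input shapes, and training details — is an assumption for illustration, not the study’s implementation:

```python
# Illustrative transfer-learning sketch: a murmur classifier pretrained on
# human heart-sound spectrograms is fine-tuned on dog recordings.
import torch
import torch.nn as nn

class MurmurNet(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = MurmurNet()
# ... pretrain on human heart-sound spectrograms here ...

# Fine-tune on (hypothetical) dog data: freeze the feature extractor and
# retrain only the classification head at a small learning rate.
for p in model.features.parameters():
    p.requires_grad = False
opt = torch.optim.Adam(model.head.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

dog_spectrograms = torch.randn(8, 1, 64, 64)   # stand-in batch
dog_labels = torch.randint(0, 2, (8,))         # murmur present / absent
for _ in range(10):                            # a few illustrative steps
    opt.zero_grad()
    loss = loss_fn(model(dog_spectrograms), dog_labels)
    loss.backward()
    opt.step()
```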
The researchers gathered data from almost 800 dogs undergoing routine heart examination at four veterinary specialist centres in the UK. All dogs received a full physical examination and heart scan (echocardiogram) by a cardiologist to grade any heart murmurs and identify cardiac disease, and heart sounds were recorded using an electronic stethoscope. The result is the largest dataset of dog heart sounds ever created, by an order of magnitude.
“Mitral valve disease mainly affects smaller dogs, but to test and improve our algorithm, we wanted to get data from dogs of all shapes, sizes and ages,” said co-author Professor Jose Novo Matos from Cambridge’s Department of Veterinary Medicine, a specialist in small animal cardiology. “The more data we have to train it, the more useful our algorithm will be, both for vets and for dog owners.”
The researchers fine-tuned the algorithm so it could both detect and grade heart murmurs based on the audio recordings, and differentiate between murmurs associated with mild disease and those reflecting advanced heart disease that required further treatment.
“Grading a heart murmur and determining whether the heart disease needs treatment requires a lot of experience, referral to a veterinary cardiologist, and expensive specialised heart scans,” said Novo Matos. “We want to empower general practitioners to detect heart disease and assess its severity to help owners make the best decisions for their dogs.”
Analysis of the algorithm’s performance found it agreed with the cardiologist’s assessment in over half of cases, and in 90% of cases, it was within a single grade of the cardiologist’s assessment. The researchers say this is a promising result, as it is common for there to be significant variability in how different vets grade heart murmurs.
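To make the two headline numbers concrete, here is a minimal sketch of how detection sensitivity and “within one grade” agreement are typically computed; the example labels are invented:

```python
# Sensitivity: proportion of actual murmurs the algorithm catches.
def sensitivity(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn)

# Within-one-grade agreement: fraction of predicted murmur grades that
# land within a single grade of the cardiologist's grade.
def within_one_grade(true_grades, pred_grades):
    hits = sum(1 for t, p in zip(true_grades, pred_grades) if abs(t - p) <= 1)
    return hits / len(true_grades)

print(sensitivity([1, 1, 0, 1, 0], [1, 1, 0, 0, 0]))   # ~0.67: 2 of 3 caught
print(within_one_grade([3, 2, 5, 1], [2, 2, 3, 1]))    # 0.75
```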
“The grade of heart murmur is a useful differentiator for determining next steps and treatments, and we’ve automated that process,” said McDonald. “For vets and nurses without as much stethoscope skill, and even those who are incredibly skilled with a stethoscope, we believe this algorithm could be a highly valuable tool.”
In humans with valve disease, the only treatment is surgery, but for dogs, effective medication is available. “Knowing when to medicate is so important, in order to give dogs the best quality of life possible for as long as possible,” said Agarwal. “We want to empower vets to help make those decisions.”
“So many people talk about AI as a threat to jobs, but for me, I see it as a tool that will make me a better cardiologist,” said Novo Matos. “We can’t perform heart scans on every dog in this country – we just don’t have enough time or specialists to screen every dog with a murmur. But tools like these could help vets and owners, so we can quickly identify those dogs who are most in need of treatment.”
The research was supported in part by the Kennel Club Charitable Trust, the Medical Research Council, and Emmanuel College Cambridge.
Awardees include Professor Frances Ross, Professor Vladan Vuletić, and graduate student Jiliang Hu ’19, PhD ’24, as well as 10 alumni. New APS Fellows include Professor Joseph Checkelsky, Senior Researcher John Chiaverini, Associate Professor Areg Danagoulian, Professor Ruben Juanes, and seven alumni.
Ross uses transmission electron microscopy to watch crystals as they grow and react under different conditions, including both liquid and gaseous environments. The microscopy techniques developed over Ross’ research career help in exploring growth mechanisms during epitaxy, catalysis, and electrochemical deposition, with applications in microelectronics and energy storage. Ross’ research group continues to develop new microscopy instrumentation to enable deeper exploration of these processes.
Vladan Vuletić, the Lester Wolfe Professor of Physics, received the 2025 Arthur L. Schawlow Prize in Laser Science “for pioneering work on spin squeezing for optical atomic clocks, quantum nonlinear optics, and laser cooling to quantum degeneracy.” Vuletić’s research includes ultracold atoms, laser cooling, large-scale quantum entanglement, quantum optics, precision tests of physics beyond the Standard Model, and quantum simulation and computing with trapped neutral atoms.
Jiliang Hu received the 2024 Award for Outstanding Doctoral Thesis Research in Biological Physics “for groundbreaking biophysical contributions to microbial ecology that bridge experiment and theory, showing how only a few coarse-grained features of ecological networks can predict emergent phases of diversity, dynamics, and invasibility in microbial communities.”
Hu is working in PhD advisor Professor Jeff Gore’s lab. He is interested in exploring the high-dimensional dynamics and emergent phenomena of complex microbial communities. In his first project, he demonstrated that multi-species communities can be described by a phase diagram as a function of the strength of interspecies interactions and the diversity of the species pool. He is now studying alternative stable states and the role of migration in the dynamics and biodiversity of metacommunities.
Alumni receiving awards:
Riccardo Betti PhD ’92 is the 2024 recipient of the John Dawson Award in Plasma Physics “for pioneering the development of statistical modeling to predict, design, and analyze implosion experiments on the 30kJ OMEGA laser, achieving hot spot energy gains above unity and record Lawson triple products for direct-drive laser fusion.”
Javier Mauricio Duarte ’10 received the 2024 Henry Primakoff Award for Early-Career Particle Physics “for accelerating trigger technologies in experimental particle physics with novel real-time approaches by embedding artificial intelligence and machine learning in programmable gate arrays, and for critical advances in Higgs physics studies at the Large Hadron Collider in all-hadronic final states.”
Richard Furnstahl ’18 is the 2025 recipient of the Feshbach Prize in Theoretical Nuclear Physics “for foundational contributions to calculations of nuclei, including applying the Similarity Renormalization Group to the nuclear force, grounding nuclear density functional theory in those forces, and using Bayesian methods to quantify the uncertainties in effective field theory predictions of nuclear observables.”
Harold Yoonsung Hwang ’93, SM ’93 is the 2024 recipient of the James C. McGroddy Prize for New Materials “for pioneering work in oxide interfaces, dilute superconductivity in heterostructures, freestanding oxide membranes, and superconducting nickelates using pulsed laser deposition, as well as for significant early contributions to the physics of bulk transition metal oxides.”
James P. Knauer ’72 received the 2024 John Dawson Award in Plasma Physics “for pioneering the development of statistical modeling to predict, design, and analyze implosion experiments on the 30kJ OMEGA laser, achieving hot spot energy gains above unity and record Lawson triple products for direct-drive laser fusion.”
Sekazi Mtingwa ’71 is the 2025 recipient of the John Wheatley Award “for exceptional contributions to capacity building in Africa, the Middle East, and other developing regions, including leadership in training researchers in beamline techniques at synchrotron light sources and establishing the groundwork for future facilities in the Global South.”
Charles E. Sing PhD ’12 received the 2024 John H. Dillon Medal “for pioneering advances in polyelectrolyte phase behavior and polymer dynamics using theory and computational modeling.”
Wennie Wang ’13 is the 2025 recipient of the Maria Goeppert Mayer Award “for outstanding contributions to the field of materials science, including pioneering research on defective transition metal oxides for energy sustainability, a commitment to broadening participation of underrepresented groups in computational materials science, and leadership and advocacy in the scientific community.”
APS Fellows
Joseph Checkelsky, the Mitsui Career Development Associate Professor of Physics, received the 2024 Division of Condensed Matter Physics Fellowship “for pioneering contributions to the synthesis and study of quantum materials, including kagome and pyrochlore metals and natural superlattice compounds.”
Affiliated with the MIT Materials Research Laboratory and the MIT Center for Quantum Engineering, Checkelsky works at the intersection of materials synthesis and quantum physics to discover new materials and physical phenomena, expanding the boundaries of our understanding of quantum mechanical condensed matter systems and opening doorways to new technologies through emergent electronic and magnetic functionalities. Research in Checkelsky’s lab focuses on the study of exotic electronic states of matter through the synthesis, measurement, and control of solid-state materials, including correlated behavior in topologically nontrivial materials, the role of geometrical phases in electronic systems, and novel types of geometric frustration.
John Chiaverini, a senior staff member in the Quantum Information and Integrated Nanosystems group and an MIT principal investigator in RLE, was elected a 2024 Fellow of the American Physical Society in the Division of Quantum Information “for pioneering contributions to experimental quantum information science, including early demonstrations of quantum algorithms, the development of the surface-electrode ion trap, and groundbreaking work in integrated photonics for trapped-ion quantum computation.”
Chiaverini is pursuing research in quantum computing and precision measurement using individual atoms. Currently, Chiaverini leads a team developing novel technologies for control of trapped-ion qubits, including trap-integrated optics and electronics; this research has the potential to allow scaling of trapped-ion systems to the larger numbers of ions needed for practical applications while maintaining high levels of control over their quantum states. He and the team are also exploring new techniques for the rapid generation of quantum entanglement between ions, as well as investigating novel encodings of quantum information that have the potential to yield higher-fidelity operations than currently available while also providing capabilities to correct the remaining errors.
Areg Danagoulian, associate professor of nuclear science and engineering, received the 2024 Forum on Physics and Society Fellowship “for seminal technological contributions in the field of arms control and cargo security, which significantly benefit international security.”
His current research interests focus on nuclear physics applications in societal problems, such as nuclear nonproliferation, technologies for arms control treaty verification, nuclear safeguards, and cargo security. Danagoulian also serves as the faculty co-director for MIT’s MISTI Eurasia program.
Ruben Juanes, professor of civil and environmental engineering and earth, atmospheric and planetary sciences (CEE/EAPS), received the 2024 Division of Fluid Dynamics Fellowship “for fundamental advances — using experiments, innovative imaging, and theory — in understanding the role of wettability for controlling the dynamics of fluid displacement in porous media and geophysical flows, and exploiting this understanding to optimize.”
An expert in the physics of multiphase flow in porous media, Juanes uses a mix of theory, computational, and real-life experiments to establish a fundamental understanding of how different fluids such as oil, water, and gas move through rocks, soil, or underwater reservoirs to solve energy and environmental-driven geophysical problems. His major contributions have been in developing improved safety and effectiveness of carbon sequestration, advanced understanding of fluid interactions in porous media for energy and environmental applications, imaging and computational techniques for real-time monitoring of subsurface fluid flows, and insights into how underground fluid movement contributes to landslides, floods, and earthquakes.
Alumni receiving fellowships:
Constantia Alexandrou PhD ’85 is the 2024 recipient of the Division of Nuclear Physics Fellowship “for the pioneering contributions in calculating nucleon structure observables using lattice QCD.”
Daniel Casey PhD ’12 received the 2024 Division of Plasma Physics Fellowship “for outstanding contributions to the understanding of the stagnation conditions required to achieve ignition.”
Maria K. Chan PhD ’09 is the 2024 recipient of the Topical Group on Energy Research and Applications Fellowship “for contributions to methodological innovations, developments, and demonstrations toward the integration of computational modeling and experimental characterization to improve the understanding and design of renewable energy materials.”
David Humphreys ’82, PhD ’91 received the 2024 Division of Plasma Physics Fellowship “for sustained leadership in developing the field of model-based dynamic control of magnetically confined plasmas, and for providing important and timely contributions to the understanding of tokamak stability, disruptions, and halo current physics.”
Eric Torrence PhD ’97 received the 2024 Division of Particles and Fields Fellowship “for significant contributions with the ATLAS and FASER Collaborations, particularly in the searches for new physics, measurement of the LHC luminosity, and for leadership in the operations of both experiments.”
Tiffany S. Santos ’02, PhD ’07 is the 2024 recipient of the Topical Group on Magnetism and Its Applications Fellowship “for innovative contributions in synthesis and characterization of novel ultrathin magnetic films and interfaces, and tailoring their properties for optimal performance, especially in magnetic data storage and spin-transport devices.”
Lei Zhou ’14, PhD ’19 received the 2024 Forum on Industrial and Applied Physics Fellowship “for outstanding and sustained contributions to the fields of metamaterials, especially for proposing metasurfaces as a bridge to link propagating waves and surface waves.”
Patients with late-stage cancer often have to endure multiple rounds of different types of treatment, which can cause unwanted side effects and may not always help.
In hopes of expanding the treatment options for those patients, MIT researchers have designed tiny particles that can be implanted at a tumor site, where they deliver two types of therapy: heat and chemotherapy.
This approach could avoid the side effects that often occur when chemotherapy is given intravenously, and the synergistic effect of the two therapies may extend the patient’s lifespan longer than giving one treatment at a time. In a study of mice, the researchers showed that this therapy completely eliminated tumors in most of the animals and significantly prolonged their survival.
“One of the examples where this particular technology could be useful is trying to control the growth of really fast-growing tumors,” says Ana Jaklenec, a principal investigator at MIT’s Koch Institute for Integrative Cancer Research. “The goal would be to gain some control over these tumors for patients that don't really have a lot of options, and this could either prolong their life or at least allow them to have a better quality of life during this period.”
Jaklenec is one of the senior authors of the new study, along with Angela Belcher, the James Mason Crafts Professor of Biological Engineering and Materials Science and Engineering and a member of the Koch Institute, and Robert Langer, an MIT Institute Professor and member of the Koch Institute. Maria Kanelli, a former MIT postdoc, is the lead author of the paper, which appears today in the journal ACS Nano.
Dual therapy
Patients with advanced tumors usually undergo a combination of treatments, including chemotherapy, surgery, and radiation. Phototherapy is a newer treatment that involves implanting or injecting particles that are heated with an external laser, raising their temperature enough to kill nearby tumor cells without damaging other tissue.
Current approaches to phototherapy in clinical trials make use of gold nanoparticles, which emit heat when exposed to near-infrared light.
The MIT team wanted to come up with a way to deliver phototherapy and chemotherapy together, which they thought could make the treatment process easier on the patient and might also have synergistic effects. They decided to use an inorganic material called molybdenum disulfide as the phototherapeutic agent. This material converts laser light to heat very efficiently, which means that low-powered lasers can be used.
To create a microparticle that could deliver both of these treatments, the researchers combined molybdenum disulfide nanosheets with either doxorubicin, a hydrophilic drug, or violacein, a hydrophobic drug. To make the particles, molybdenum disulfide and the chemotherapeutic are mixed with a polymer called polycaprolactone and then dried into a film that can be pressed into microparticles of different shapes and sizes.
For this study, the researchers created cubic particles with a width of 200 micrometers. Once injected into a tumor site, the particles remain there throughout the treatment. During each treatment cycle, an external near-infrared laser is used to heat up the particles. This laser can penetrate to a depth of a few millimeters to centimeters, with a local effect on the tissue.
“The advantage of this platform is that it can act on demand in a pulsatile manner,” Kanelli says. “You administer it once through an intratumoral injection, and then using an external laser source you can activate the platform, release the drug, and at the same time achieve thermal ablation of the tumor cells.”
To optimize the treatment protocol, the researchers used machine-learning algorithms to figure out the laser power, irradiation time, and concentration of the phototherapeutic agent that would lead to the best outcomes.
That led them to design a laser treatment cycle that lasts for about three minutes. During that time, the particles are heated to about 50 degrees Celsius, which is hot enough to kill tumor cells. Also at this temperature, the polymer matrix within the particles begins to melt, releasing some of the chemotherapy drug contained within the matrix.
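The article doesn’t specify which machine-learning method the team used, but the kind of search it describes — tuning laser power, irradiation time, and phototherapeutic concentration against a predicted outcome — can be sketched generically. The surrogate objective and parameter ranges below are invented for illustration, not drawn from the study:

```python
# Toy parameter search: score candidate (power, time, concentration)
# settings against a made-up surrogate model that rewards reaching the
# ~50 C ablation temperature at low laser power and short exposure.
import itertools

def predicted_outcome(power_w, time_s, conc_mg_ml):
    temp = 25 + 8.0 * power_w * conc_mg_ml * (time_s / 60) ** 0.5
    overshoot = abs(temp - 50)                 # distance from target temp
    return -(overshoot + 0.1 * power_w + 0.01 * time_s)

grid = itertools.product(
    [0.5, 1.0, 1.5, 2.0],   # laser power (W)
    [60, 120, 180, 240],    # irradiation time (s)
    [0.5, 1.0, 2.0],        # phototherapeutic concentration (mg/mL)
)
best = max(grid, key=lambda g: predicted_outcome(*g))
print("best (power, time, concentration):", best)
```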
“This machine-learning-optimized laser system really allows us to deploy low-dose, localized chemotherapy by leveraging the deep tissue penetration of near-infrared light for pulsatile, on-demand photothermal therapy. This synergistic effect results in low systemic toxicity compared to conventional chemotherapy regimens,” says Neelkanth Bardhan, a Break Through Cancer research scientist in the Belcher Lab, and second author of the paper.
Eliminating tumors
The researchers tested the microparticle treatment in mice that were injected with an aggressive type of cancer cells from triple-negative breast tumors. Once tumors formed, the researchers implanted about 25 microparticles per tumor, and then performed the laser treatment three times, with three days in between each treatment.
“This is a powerful demonstration of the usefulness of near-infrared-responsive material systems,” says Belcher, who, along with Bardhan, has previously worked on near-infrared imaging systems for diagnostic and treatment applications in ovarian cancer. “Controlling the drug release at timed intervals with light, after just one dose of particle injection, is a game changer for less painful treatment options and can lead to better patient compliance.”
In mice that received this treatment, the tumors were completely eradicated, and the mice lived much longer than those that were given either chemotherapy or phototherapy alone, or no treatment. Mice that underwent all three treatment cycles also fared much better than those that received just one laser treatment.
The polymer used to make the particles is biocompatible and has already been FDA-approved for medical devices. The researchers now hope to test the particles in larger animal models, with the goal of eventually evaluating them in clinical trials. They expect that this treatment could be useful for any type of solid tumor, including metastatic tumors.
The research was funded by the Bodossaki Foundation, the Onassis Foundation, a Mazumdar-Shaw International Oncology Fellowship, a National Cancer Institute Fellowship, and the Koch Institute Support (core) Grant from the National Cancer Institute.
Weight-loss surgery down 25 percent as anti-obesity drug use soars
Study authors call for more research examining how trend affects long-term patient outcomes
BWH Communications
4 min read
A new study examining a large sample of privately insured patients with obesity found that use of drugs such as Ozempic and Wegovy as anti-obesity medications more than doubled from 2022 to 2023. During that same period, there was a 25.6 percent decrease in patients undergoing metabolic bariatric surgery to treat obesity.
The study, by researchers at Brigham and Women’s Hospital, in collaboration with researchers at Harvard T.H. Chan School of Public Health and the Brown School of Public Health, is published in JAMA Network Open.
“Our study provides one of the first national estimates of the decline in utilization of bariatric metabolic surgery among privately insured patients corresponding to the rising use of blockbuster GLP-1 RA drugs,” said senior author Thomas C. Tsai, a metabolic bariatric surgeon at Brigham and Women’s Hospital.
Using a national sample of medical insurance claims data from more than 17 million privately insured adults, the researchers identified patients with a diagnosis of obesity without diabetes in 2022-2023. The study found a sharp increase in the share of patients who received glucagon-like peptide-1 receptor agonists, or GLP-1 RAs, during the study period, with GLP-1 RA use increasing 132.6 percent from the last six months of 2022 to the last six months of 2023 (from 1.89 to 4.41 patients per 1,000 patients). Meanwhile, there was a 25.6 percent decrease in use of bariatric metabolic surgery during the same period (from 0.22 to 0.16 patients per 1,000 patients).
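As a quick arithmetic check, the percent changes follow directly from the per-1,000 rates quoted above; the published figures were computed from unrounded values, so the results differ slightly:

```python
# Percent change from per-1,000-patient utilization rates.
def pct_change(before, after):
    return (after - before) / before * 100

print(pct_change(1.89, 4.41))   # GLP-1 RA use: ~ +133% (reported: 132.6%)
print(pct_change(0.22, 0.16))   # bariatric surgery: ~ -27% (reported: 25.6%)
```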
Among the sample of patients with obesity, 94.7 percent received neither form of treatment during the study period (while 5 percent received GLP-1 RAs and 0.3 percent received surgery). Compared to patients who were prescribed GLP-1 RAs, patients who underwent surgery tended to be more medically complex.
“For now, metabolic bariatric surgery remains the most effective and durable treatment for obesity. National efforts should focus on improving access to obesity treatment — whether pharmacologic or surgical — to ensure patients can receive optimal care,” said Tsai, who is also an assistant professor of surgery at Harvard Medical School and an assistant professor in health policy and management at Harvard T.H. Chan School of Public Health.
Tsai notes that while GLP-1 RAs can effectively treat obesity and related conditions (such as diabetes), these medications have been limited by high costs, limited supply, and gastrointestinal side effects that may prompt treatment cessation and subsequent weight regain.
“As patients with obesity increasingly rely on GLP-1s instead of surgical intervention, further research is needed to assess the impact of this shift from surgical to pharmacologic treatment of obesity on long-term patient outcomes,” Tsai said. “With the national decline in utilization of metabolic bariatric surgery and potential closure of bariatric surgery programs, there is a concern that access to comprehensive multidisciplinary treatment of obesity involving pharmacologic, endoscopic, or surgical interventions may become more limited.”
“These results also highlight an opportunity to further expand uptake of surgical and pharmacologic treatments for obesity and related comorbidities,” said co-author Ateev Mehrotra, chair of the Department of Health Services, Policy and Practice at the Brown University School of Public Health. “Metabolic bariatric surgery and GLP-1 RAs are both effective interventions for patients with obesity, yet less than 6 percent of patients in our study received either form of treatment.”
Considering these results, the authors encourage clinicians and policymakers to continue to monitor access to effective obesity treatment amid a rapidly evolving landscape of treatment options. In addition, further research is needed to understand the tradeoffs between use of surgical intervention and increasingly popular GLP-1 RAs to treat obesity.
Funding/disclosures: Tsai reported receiving grants from the National Center for Advancing Translational Sciences, National Institutes of Health to Harvard Catalyst, the Harvard Clinical and Translational Science Center, and financial contributions from Harvard University and its affiliated academic health care centers.
What is it like to give birth on Mars? Can bioengineer TikTok stars win at the video game “Super Smash Brothers” while also answering questions about science? How do sheep, mouse, and human brains compare? These questions and others were asked last month when more than 50,000 visitors from across Cambridge, Massachusetts, and Greater Boston participated in the MIT Museum’s annual Cambridge Science Festival, a week-long celebration dedicated to creativity, ingenuity, and innovation. Running Monday, Sept. 23 through Sunday, Sept. 29, the 2024 edition was the largest in its history, with a dizzyingly diverse program spanning more than 300 events presented in more than 75 different venues, all free and open to the public.
Presented in partnership with the City of Cambridge and more than 250 collaborators across Greater Boston, this year’s festival comprised a wide range of interactive programs for adults, children, and families, including workshops, demos, keynote lectures, walking tours, professional networking opportunities, and expert panels. Aimed at scientists and non-scientists alike, the festival also collaborated with several local schools to offer visits from an astronaut for middle- and high-school students.
With support from dozens of local organizations, the festival was the first iteration to happen under the new leadership of Michael John Gorman, who was appointed director of the MIT Museum in January and began his position in July.
“A science festival like this has an incredible ability to unite a diverse array of people and ideas, while also showcasing Cambridge as an internationally recognized leader in science, technology, engineering, and math,” says Gorman. “I'm thrilled to have joined an institution that values producing events that foster such a strong sense of community, and was so excited to see the enthusiastic response from the tens of thousands of people who showed up and made the festival such a success.”
The 2024 Cambridge Science Festival was broad in scope, with events ranging from hands-on 3D-printing demos to concerts from the MIT Laptop Ensemble to participatory activities at the MIT Museum’s Maker Hub. This year’s programming also highlighted three carefully curated theme tracks that each encompassed more than 25 associated events:
“For the Win: Games, Puzzles, and the Science of Play” (Thursday) consisted of multiple evening events clustered around Kendall Square.
“Frontiers: A New Era of Space Exploration” (Friday and Saturday) featured programs throughout Boston and was co-curated by The Space Consortium, organizers of Massachusetts Space Week.
“Electric Skin: Wearable Tech and the Future of Fashion” (Saturday) offered both day and evening events at the intersection of science, fabric, and fashion, taking place at The Foundry and co-curated by Boston Fashion Week and Advanced Functional Fabrics of America.
One of the discussions tied to the games-themed “For the Win” track involved artist Jeremy Couillard speaking with MIT Lecturer Mikael Jakobsson about the larger importance of games as a construct for encouraging interpersonal interaction and creating meaningful social spaces. Starting this past summer, the List Visual Arts Center has been the home of Couillard’s first-ever institutional solo exhibition, which centers around “Escape from Lavender Island,” a dystopian third-person, open-world exploration game he released in 2023 on the Steam video-game platform.
For the “Frontiers” space theme, one of the headlining events, “Is Anyone Out There?”, tackled the latest cutting-edge research and theories related to the potential existence of extraterrestrial life. The panel of local astronomers and astrophysicists included Sara Seager, the Class of 1941 Professor of Planetary Science, professor of physics, and professor of aeronautics and astronautics at MIT; Kim Arcand, an expert in astronomic visualization at the Harvard-Smithsonian Center for Astrophysics; and Michael Hecht, a research scientist and associate director of research management at MIT’s Haystack Observatory. The researchers spoke about the tools they and their peers use to try to search for extraterrestrial life, and what discovering life beyond our planet might mean for humanity.
For the “Electric Skin” fashion track, events spanned a range of topics revolving around the role that technology will play in the future of the field, including sold-out workshops where participants learned how to laser-cut and engineer “structural garments.” A panel looking at generative technologies explored how designers are using AI to spur innovation in their companies. Onur Yüce Gün, director of computational design at New Balance, also spoke on a panel with Ziyuan “Zoey” Zhu from IDEO, MIT Media Lab research scientist and architect Behnaz Farahi, and Fiorenzo Omenetto, principal investigator and director of The Tufts Silk Lab and the Frank C. Doble Professor of Engineering at Tufts University and a professor in the Biomedical Engineering Department and in the Department of Physics at Tufts.
Beyond the three themed tracks, the festival comprised an eclectic mix of interactive events and panels. Cambridge Public Library hosted a “Science Story Slam” with high-school students from 10 different states competing for $5,000 in prize money. Entrants shared 5-minute-long stories about their adventures in STEM, with topics ranging from probability to “astro-agriculture.” Judges included several MIT faculty and staff, as well as New York Times national correspondent Kate Zernike.
Elsewhere, the MIT Museum’s Gorman moderated a discussion on AI and democracy that included Audrey Tang, the former minister of digital affairs of Taiwan. The panelists explored how AI tools could combat the polarization of political discourse and increase participation in democratic processes, particularly for marginalized voices. Also in the MIT Museum, the McGovern Institute for Brain Research organized a “Decoding the Brain” event with demos involving real animal brains, while the Broad Institute of MIT and Harvard ran a “Discovery After Dark” event to commemorate the institute’s 20th anniversary. Sunday’s Science Carnival featured more than 100 demos, events, and activities, including the ever-popular “Robot Petting Zoo.”
A type of therapy that involves applying a magnetic field to both sides of the brain has been shown to be effective at rapidly treating depression in patients for whom standard treatments have been ineffective.
The treatment – known as repetitive transcranial magnetic stimulation (TMS) – involves placing an electromagnetic coil against the scalp to relay a high-frequency magnetic field to the brain.
Around one in 20 adults is estimated to suffer from depression. Although treatments exist, such as anti-depressant medication and cognitive behavioural therapy (‘talking therapy’), they are ineffective for just under one in three patients.
One of the key characteristics of depression is under-activity of some regions (such as the dorsolateral prefrontal cortex) and over-activity of others (such as the orbitofrontal cortex (OFC)).
Repetitive transcranial magnetic stimulation applied to the left side of the dorsolateral prefrontal cortex (an area at the upper front area of the brain) is approved for treatment of depression in the UK by NICE and in the US by the FDA. It has previously been shown to lead to considerable improvements among patients after a course of 20 sessions, but because the sessions usually take place over 20-30 days, the treatment is not ideal for everyone, particularly in acute cases or where a person is suicidal.
In research published in Psychological Medicine, scientists from Cambridge, UK, and Guiyang, China, tested how effective an accelerated form of TMS is. In this approach, the treatment is given over 20 sessions, but with four sessions per day over a period of five consecutive days.
The researchers also tested a ‘dual’ approach, whereby a magnetic field was additionally applied to the right-hand side of the OFC (which sits below the dorsolateral prefrontal cortex).
Seventy-five patients were recruited to the trial from the Second People’s Hospital of Guizhou Province in China. The severity of their depression was measured on a scale known as the Hamilton Rating Scale for Depression.
Participants were split randomly into three groups: a ‘dual’ group receiving TMS applied first to the right- and then to the left-hand sides of the brain; a ‘single’ group receiving sham TMS to the right-side followed by active TMS applied to the left-side; and a control group receiving a sham treatment to both sides. Each session lasted in total 22 minutes.
There was a significant improvement in scores assessed immediately after the final treatment in the dual treatment group compared to the other two groups. When the researchers looked for clinically-relevant responses – that is, where an individual’s score fell by at least 50% – they found that almost half (48%) of the patients in the dual treatment group saw such a reduction, compared to just under one in five (18%) in the single treatment group and fewer than one in 20 (4%) in the control group.
Four weeks later, around six in 10 participants in both the dual and single treatment groups (61% and 59% respectively) showed clinically relevant responses, compared to just over one in five (22%) in the control group.
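The responder criterion behind these numbers is simple to state in code: a clinically relevant response means an individual’s depression score fell by at least 50% from baseline. A minimal sketch, with invented scores:

```python
# Fraction of participants whose score dropped by >= 50% from baseline.
def responder_rate(baseline, followup, threshold=0.5):
    responders = sum(
        1 for b, f in zip(baseline, followup) if (b - f) / b >= threshold
    )
    return responders / len(baseline)

baseline = [24, 30, 18, 26]      # hypothetical Hamilton scores before treatment
after_tx = [10, 22, 8, 13]       # hypothetical scores after treatment
print(responder_rate(baseline, after_tx))   # 0.75
```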
Professor Valerie Voon from the Department of Psychiatry at the University of Cambridge, who led the UK side of the study, said: “Our accelerated approach means we can do all of the sessions in just five days, rapidly reducing an individual’s symptoms of depression. This means it could be particularly useful in severe cases of depression, including when someone is experiencing suicidal thoughts. It may also help people be discharged from hospital more rapidly or even avoid admission in the first place.
“The treatment works faster because, by targeting two areas of the brain implicated in depression, we’re effectively correcting imbalances in two important processes, getting brain regions ‘talking’ to each other correctly.”
The treatment was most effective in those patients who at the start of the trial showed greater connectivity between the OFC and the thalamus (an area in the middle of the brain responsible for, among other things, regulation of consciousness, sleep, and alertness). The OFC is important for helping us make decisions, particularly in choosing rewards and avoiding punishment. Its over-activity in depression, particularly in relation to its role in anti-reward or punishment, might help explain why people with depression show a bias towards negative expectations and ruminations.
Dr Yanping Shu from the Guizhou Mental Health Centre, Guiyang, China, said: “This new treatment has demonstrated a more pronounced – and faster – improvement in response rates for patients with major depressive disorder. It represents a significant step forward in improving outcomes, enabling rapid discharge from hospitals for individuals with treatment-resistant depression, and we are hopeful it will lead to new possibilities in mental health care.”
Dr Hailun Cui from Fudan University, a PhD student in Professor Voon’s lab at the time of the study, added: “The management of treatment-resistant depression remains one of the most challenging areas in mental health care. These patients often fail to respond to standard treatments, including medication and psychotherapy, leaving them in a prolonged state of severe distress, functional impairment, and increased risk of suicide.
“This new TMS approach offers a beacon of hope in this difficult landscape. Patients frequently reported experiencing ‘lighter and brighter’ feelings as early as the second day of treatment. The rapid improvements, coupled with a higher response rate that could benefit a broader depressed population, mark a significant breakthrough in the field.”
Just under half (48%) of participants in the dual treatment group reported local pain where the treatment was applied, compared with just under one in 10 (9%) of participants in the single treatment group. Despite this, there were no dropouts.
For some individuals, this treatment may be sufficient, but for others ‘maintenance therapy’ may be necessary, with an additional day session if their symptoms appear to be worsening over time. It may also be possible to re-administer standard therapy as patients can then become more able to engage in psychotherapy. Other options include using transcranial direct current stimulation, a non-invasive form of stimulation using weak electrical impulses that can be delivered at home.
The researchers are now exploring exactly which part of the orbitofrontal cortex is most effective to target and for which types of depression.
The research was supported in the UK by the Medical Research Council and the National Institute for Health and Care Research Cambridge Biomedical Research Centre.*
*A full list of funders is available in the journal paper.
Professor David Rowitch, Head of the Department of Paediatrics at the University of Cambridge, has been elected to the prestigious National Academy of Medicine (NAM) in the USA.
Election to the Academy is considered one of the highest honours in the fields of health and medicine and recognises individuals who have demonstrated outstanding professional achievement and commitment to service.
“It is a great honour to have been elected to the National Academy of Medicine,” said Professor Rowitch.
Professor Rowitch obtained his PhD from the University of Cambridge. His research in the field of developmental neurobiology has focused on glial cells that comprise the ‘white matter’ of the human brain. It has furthered understanding of human neonatal brain development, as well as of white matter injury in premature infants, multiple sclerosis and leukodystrophy. Amongst numerous awards, he was elected a Fellow of the Academy of Medical Sciences in 2018 and a Fellow of the Royal Society in 2021.
Professor Rowitch’s current interest focuses on functional genomic technologies to better diagnose and treat rare neurogenetic disorders in children. He is academic lead for the new Cambridge Children’s Hospital, developing integrated paediatric physical-mental healthcare and research within the NHS and University of Cambridge.
NAM President Victor J. Dzau said: “This class of new members represents the most exceptional researchers and leaders in health and medicine, who have made significant breakthroughs, led the response to major public health challenges, and advanced health equity.
“Their expertise will be necessary to supporting NAM’s work to address the pressing health and scientific challenges we face today. It is my privilege to welcome these esteemed individuals to the National Academy of Medicine.”
Professor Rowitch is one of 90 regular members and 10 international members announced during the Academy’s annual meeting. New members are elected by current members through a process that recognises individuals who have made major contributions to the advancement of the medical sciences, health care, and public health.
Rising numbers of houses and flats listed as short-term lets on Airbnb are associated with higher rates of crimes such as burglaries and street robberies right across London, according to the most detailed study of its kind.
Latest research has revealed a ‘positive association’ between the number of properties listed as Airbnb rentals and police-reported robberies and violent crimes in thousands of London neighbourhoods between 2015 and 2018.
In fact, the study led by the University of Cambridge suggests that a 10% increase in active Airbnb rentals in the city would correspond to an additional 1,000 robberies per year across London.*
Urban sociologists say the rapid pace at which crime rises in conjunction with new rentals suggests that the link is related more to opportunities for crime, rather than loss of cohesion within communities – although both are likely contributing factors.
“We tested for the most plausible alternative explanations, from changes in police patrols to tourist hotspots and even football matches,” said Dr Charles Lanfear from Cambridge’s Institute of Criminology, co-author of the study published today in the journal Criminology.
“Nothing changed the core finding that Airbnb rentals are related to higher crime rates in London neighbourhoods.”
“While Airbnb offers benefits to tourists and hosts in terms of ease and financial reward, there may be social consequences to turning large swathes of city neighbourhoods into hotels with little regulation,” Lanfear said.
Founded in 2008, Airbnb is a giant of the digital economy, with more than 5 million property hosts now active on the platform in some 100,000 cities worldwide.
However, concerns that Airbnb is contributing to unaffordable housing costs have led to a backlash among residents of cities such as Barcelona, and to calls for greater regulation.
London is one of the most popular Airbnb markets in the world. An estimated 4.5 million guests stayed in a London Airbnb during the period covered by the study.
Lanfear and his University of Pennsylvania co-author Professor David Kirk used masses of data from AirDNA: a site that scrapes Airbnb to provide figures, trends and approximate geolocations for the short-term letting market.
They mapped AirDNA data from 13 calendar quarters (January 2015 to March 2018) onto ‘Lower Layer Super Output Areas’, or LSOAs.
These are designated areas of a few streets containing around two thousand residents, used primarily for UK census purposes. There are 4,835 LSOAs in London, and all were included in the study.
Crime statistics from the UK Home Office and Greater London Authority for six categories – robbery, burglary, theft, anti-social behaviour, any violence, and bodily harm – were then mapped onto the LSOAs populated with AirDNA data.
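Although the study’s own pipeline is not published here, the panel it describes has a simple shape: listing counts and crime counts joined by neighbourhood and quarter. A minimal sketch, with hypothetical column names:

```python
# Minimal sketch with hypothetical column names, not the study's code:
# join quarterly Airbnb listing counts to crime counts per LSOA.
import pandas as pd

listings = pd.DataFrame({
    "lsoa": ["A", "A", "B"],
    "quarter": ["2015Q1", "2015Q2", "2015Q1"],
    "active_airbnbs": [12, 15, 3],
})
crimes = pd.DataFrame({
    "lsoa": ["A", "A", "B"],
    "quarter": ["2015Q1", "2015Q2", "2015Q1"],
    "robberies": [5, 7, 1],
})

panel = listings.merge(crimes, on=["lsoa", "quarter"], how="left")
print(panel)  # one row per LSOA-quarter, ready for regression modelling
```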
The researchers analysed all forms of Airbnb lets, but found the link between active Airbnbs and crime is primarily down to entire properties for rent, rather than spare or shared rooms.
The association between active Airbnb rentals and crime was most significant for robbery and burglary, followed by theft and any violence. No link was found for anti-social behaviour and bodily harm.
On average across London, an additional Airbnb property was associated with a 2% increase in the robbery rate within an LSOA. This association was 1% for thefts, 0.9% for burglaries, and 0.5% for violence.
“While the potential criminogenic effect for each Airbnb rental is small, the accumulative effect of dozens in a neighbourhood, or tens of thousands across the city, is potentially huge,” Lanfear said.
He points out that London had an average of 53,000 active lettings in each calendar-quarter of the study period, and an average of 11 lettings per LSOA.
At its most extreme, one neighbourhood in Soho, an area famed for nightlife, had a high of 318 dedicated Airbnbs – some 30% of all households in the LSOA.
The data models suggest that a 3.2% increase in all types of Airbnb rentals per LSOA would correspond to a 1% increase in robberies city-wide: 325 additional robberies based on the figure of 32,500 recorded robberies in London in 2018.
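Those headline figures follow from simple proportional arithmetic, assuming the modelled association extrapolates linearly:

```python
# Back-of-envelope check of the figures quoted above; illustrative only,
# and it assumes the association scales linearly.
robberies_2018 = 32_500
per_percent = robberies_2018 * 0.01        # 1% city-wide = 325 robberies
print(per_percent)                          # 325.0

# A 10% rise in rentals is 10 / 3.2 of the 3.2% step quoted above:
print(per_percent * (10 / 3.2))             # ~1016, i.e. roughly 1,000/year
```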
Lanfear and Kirk extensively stress-tested the association between Airbnb listings and London crime rates.
This included factoring in ‘criminogenic variables’ such as property prices, police stops, the regularity of police patrols, and even English Premier League football games (by both incorporating attendance into data modelling, and removing all LSOAs within a kilometre of major games).
The duo re-ran their data models excluding all the 259 LSOAs in central London’s Zone One, to see if the association was limited to high tourism areas with lots of Airbnb listings. The data models even incorporated the seasonal ‘ebb and flow’ of London tourism. Nothing changed the overall trends.
Prior to crunching the numbers, the researchers speculated that any link might be down to Airbnbs affecting ‘collective efficacy’: the social cohesion within a community, combined with a willingness to intervene for the public good.
The study measured levels of collective efficacy across the city using data from both the Metropolitan Police and the Mayor of London’s Office, who conduct surveys on public perceptions of criminal activity and the likely responses of their community.
Collective efficacy across London was consistently high, but it did not explain the association between Airbnbs and crime in the data models.
Moreover, when Airbnb listings rise, the effect on crime is more immediate than one caused by a slow erosion of collective efficacy. “Crime seems to go up as soon as Airbnbs appear, and stays elevated for as long as they are active,” said Lanfear.
The researchers conclude it is likely driven by criminal opportunity. “A single Airbnb rental can create different types of criminal opportunity,” said Lanfear.
“An Airbnb rental can provide an easy potential victim such as a tourist unfamiliar with the area, or a property that is regularly vacant and so easier to burgle. A very temporary occupant may be more likely to cause criminal damage.”
“Offenders may learn to return to areas with more Airbnbs to find unguarded targets,” said Lanfear. “More dedicated Airbnb properties may mean fewer long-term residents with a personal stake in the area who are willing to report potential criminal activity.”
Airbnb has taken steps to prevent crime, including some background checks as well as requirements for extended bookings on occasions popular for one-night parties, such as New Year’s Eve. “The fact that we still find an increase in crime despite Airbnb’s efforts to curtail it reveals the severity of the predicament,” said Kirk.
Added Lanfear: “Short-term letting sites such as Airbnb create incentives for landlords that lead to property speculation, and we can see the effect on urban housing markets. We can now see that the expansion of Airbnb may contribute to city crime rates.”
“It is not the company or even the property owners who experience the criminogenic side effects of Airbnb, it is the local residents building their lives in the neighbourhood.”
Notes:
*Above 2018 levels, which is when the study data ends.
The transition from water to land is one of the most significant events in the history of life on Earth. Now, a team of roboticists, palaeontologists and biologists is using robots to study how the ancestors of modern land animals transitioned from swimming to walking, about 390 million years ago.
Writing in the journal Science Robotics, the research team, led by the University of Cambridge, outline how ‘palaeo-inspired robotics’ could provide a valuable experimental approach to studying how the pectoral and pelvic fins of ancient fish evolved to support weight on land.
“Since fossil evidence is limited, we have an incomplete picture of how ancient life made the transition to land,” said lead author Dr Michael Ishida from Cambridge’s Department of Engineering. “Palaeontologists examine ancient fossils for clues about the structure of hip and pelvic joints, but there are limits to what we can learn from fossils alone. That’s where robots can come in, helping us fill gaps in the research, particularly when studying major shifts in how vertebrates moved.”
Ishida is a member of Cambridge’s Bio-Inspired Robotics Laboratory, led by Professor Fumiya Iida. The team is developing energy-efficient robots for a variety of applications, which take their inspiration from the efficient ways that animals and humans move.
With funding from the Human Frontier Science Program, the team is developing palaeo-inspired robots, in part by taking their inspiration from modern-day ‘walking fish’ such as mudskippers, and from fossils of extinct fish. “In the lab, we can’t make a living fish walk differently, and we certainly can’t get a fossil to move, so we’re using robots to simulate their anatomy and behaviour,” said Ishida.
The team is creating robotic analogues of ancient fish skeletons, complete with mechanical joints that mimic muscles and ligaments. Once complete, the team will perform experiments on these robots to determine how these ancient creatures might have moved.
“We want to know things like how much energy different walking patterns would have required, or which movements were most efficient,” said Ishida. “This data can help confirm or challenge existing theories about how these early animals evolved.”
One of the biggest challenges in this field is the lack of comprehensive fossil records. Many of the ancient species from this period in Earth’s history are known only from partial skeletons, making it difficult to reconstruct their full range of movement.
“In some cases, we’re just guessing how certain bones connected or functioned,” said Ishida. “That’s why robots are so useful—they help us confirm these guesses and provide new evidence to support or rebut them.”
While robots are commonly used to study movement in living animals, very few research groups are using them to study extinct species. “There are only a few groups doing this kind of work,” said Ishida. “But we think it’s a natural fit – robots can provide insights into ancient animals that we simply can’t get from fossils or modern species alone.”
The team hopes that their work will encourage other researchers to explore the potential of robotics to study the biomechanics of long-extinct animals. “We’re trying to close the loop between fossil evidence and real-world mechanics,” said Ishida. “Computer models are obviously incredibly important in this area of research, but since robots are interacting with the real world, they can help us test theories about how these creatures moved, and maybe even why they moved the way they did.”
The team is currently in the early stages of building their palaeo-robots, but they hope to have some results within the next year. The researchers say they hope their robot models will not only deepen understanding of evolutionary biology, but could also open up new avenues of collaboration between engineers and researchers in other fields.
The research was supported by the Human Frontier Science Program. Fumiya Iida is a Fellow of Corpus Christi College, Cambridge. Michael Ishida is a Postdoctoral Research Associate at Gonville and Caius College, Cambridge.
Pioneer Fellow Hao Liu uses lasers to produce microfilament structures to grow biological tissue in the lab for research and medicine – from muscle tissue to cartilage. Now he’s working to ready this technology for the market.
The next generation of biomedical innovators and entrepreneurs from around the globe converged at the University of Melbourne today to explore how they can help the industry enhance the research commercialisation ecosystem.
In the classic cartoon “The Jetsons,” Rosie the robotic maid seamlessly switches from vacuuming the house to cooking dinner to taking out the trash. But in real life, training a general-purpose robot remains a major challenge.
Typically, engineers collect data that are specific to a certain robot and task, which they use to train the robot in a controlled environment. However, gathering these data is costly and time-consuming, and the robot will likely struggle to adapt to environments or tasks it hasn’t seen before.
To train better general-purpose robots, MIT researchers developed a versatile technique that combines a huge amount of heterogeneous data from many sources into one system that can teach any robot a wide range of tasks.
Their method involves aligning data from varied domains, like simulations and real robots, and multiple modalities, including vision sensors and robotic arm position encoders, into a shared “language” that a generative AI model can process.
By combining such an enormous amount of data, this approach can be used to train a robot to perform a variety of tasks without the need to start training it from scratch each time.
This method could be faster and less expensive than traditional techniques because it requires far fewer task-specific data. In addition, it outperformed training from scratch by more than 20 percent in simulation and real-world experiments.
“In robotics, people often claim that we don’t have enough training data. But in my view, another big problem is that the data come from so many different domains, modalities, and robot hardware. Our work shows how you’d be able to train a robot with all of them put together,” says Lirui Wang, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this technique.
Wang’s co-authors include fellow EECS graduate student Jialiang Zhao; Xinlei Chen, a research scientist at Meta; and senior author Kaiming He, an associate professor in EECS and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). The research will be presented at the Conference on Neural Information Processing Systems.
Inspired by LLMs
A robotic “policy” takes in sensor observations, like camera images or proprioceptive measurements that track the speed and position of a robotic arm, and then tells a robot how and where to move.
Policies are typically trained using imitation learning, meaning a human demonstrates actions or teleoperates a robot to generate data, which are fed into an AI model that learns the policy. Because this method uses a small amount of task-specific data, robots often fail when their environment or task changes.
To develop a better approach, Wang and his collaborators drew inspiration from large language models like GPT-4.
These models are pretrained using an enormous amount of diverse language data and then fine-tuned by feeding them a small amount of task-specific data. Pretraining on so much data helps the models adapt to perform well on a variety of tasks.
“In the language domain, the data are all just sentences. In robotics, given all the heterogeneity in the data, if you want to pretrain in a similar manner, we need a different architecture,” he says.
Robotic data take many forms, from camera images to language instructions to depth maps. At the same time, each robot is mechanically unique, with a different number and orientation of arms, grippers, and sensors. Plus, the environments where data are collected vary widely.
The MIT researchers developed a new architecture called Heterogeneous Pretrained Transformers (HPT) that unifies data from these varied modalities and domains.
They put a machine-learning model known as a transformer into the middle of their architecture, which processes vision and proprioception inputs. A transformer is the same type of model that forms the backbone of large language models.
The researchers align data from vision and proprioception into the same type of input, called a token, which the transformer can process. Each input is represented with the same fixed number of tokens.
Then the transformer maps all inputs into one shared space, growing into a huge, pretrained model as it processes and learns from more data. The larger the transformer becomes, the better it will perform.
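The released HPT code is not reproduced here, but the core idea is straightforward to sketch: project each modality to the same fixed number of tokens, concatenate the sequences, and run a shared transformer trunk over the result. A minimal sketch, with hypothetical sizes:

```python
# Minimal sketch of the idea described above, with hypothetical sizes;
# this is not the released HPT code. Each modality is projected to the
# same fixed number of tokens, then a shared transformer trunk processes
# the concatenated sequence.
import torch
import torch.nn as nn

N_TOKENS, D_MODEL = 16, 128

class ModalityTokenizer(nn.Module):
    """Map one modality's feature vector to a fixed-length token sequence."""
    def __init__(self, in_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, N_TOKENS * D_MODEL)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x).view(x.shape[0], N_TOKENS, D_MODEL)

vision_tok = ModalityTokenizer(in_dim=512)   # e.g. pooled camera features
proprio_tok = ModalityTokenizer(in_dim=32)   # e.g. joint angles/velocities
trunk = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=8, batch_first=True),
    num_layers=4,
)

vision = torch.randn(2, 512)                 # a batch of 2 observations
proprio = torch.randn(2, 32)
tokens = torch.cat([vision_tok(vision), proprio_tok(proprio)], dim=1)
features = trunk(tokens)                     # shared representation
print(features.shape)                        # torch.Size([2, 32, 128])
```

On top of such a shared trunk, a small robot-specific output head would then be trained to produce actions; that head is the part a new user adapts with their own data.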
A user only needs to feed HPT a small amount of data on their robot’s design, setup, and the task they want it to perform. Then HPT transfers the knowledge the transformer gained during pretraining to learn the new task.
Enabling dexterous motions
One of the biggest challenges of developing HPT was building the massive dataset to pretrain the transformer, which included 52 datasets with more than 200,000 robot trajectories in four categories, including human demo videos and simulation.
The researchers also needed to develop an efficient way to turn raw proprioception signals from an array of sensors into data the transformer could handle.
“Proprioception is key to enable a lot of dexterous motions. Because the number of tokens in our architecture is always the same, we place the same importance on proprioception and vision,” Wang explains.
When they tested HPT, it improved robot performance by more than 20 percent on simulation and real-world tasks, compared with training from scratch each time. Even when the task was very different from the pretraining data, HPT still improved performance.
“This paper provides a novel approach to training a single policy across multiple robot embodiments. This enables training across diverse datasets, enabling robot learning methods to significantly scale up the size of datasets that they can train on. It also allows the model to quickly adapt to new robot embodiments, which is important as new robot designs are continuously being produced,” says David Held, associate professor at the Carnegie Mellon University Robotics Institute, who was not involved with this work.
In the future, the researchers want to study how data diversity could boost the performance of HPT. They also want to enhance HPT so it can process unlabeled data like GPT-4 and other large language models.
“Our dream is to have a universal robot brain that you could download and use for your robot without any training at all. While we are just in the early stages, we are going to keep pushing hard and hope scaling leads to a breakthrough in robotic policies, like it did with large language models,” he says.
This work was funded, in part, by the Amazon Greater Boston Tech Initiative and the Toyota Research Institute.
When you think about hands-free devices, you might picture Alexa and other voice-activated in-home assistants, Bluetooth earpieces, or asking Siri to make a phone call in your car. You might not imagine using your mouth to communicate with other devices like a computer or a phone remotely.
Thinking outside the box, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and Aarhus University researchers have now engineered “MouthIO,” a dental brace that can be fabricated with sensors and feedback components to capture in-mouth interactions and data. This interactive wearable could eventually assist dentists and other doctors with collecting health data and help motor-impaired individuals interact with a phone, computer, or fitness tracker using their mouths.
Resembling an electronic retainer, MouthIO is a see-through brace that fits the specifications of your upper or lower set of teeth from a scan. The researchers created a plugin for the modeling software Blender to help users tailor the device to fit a dental scan, where you can then 3D print your design in dental resin. This computer-aided design tool allows users to digitally customize a panel (called PCB housing) on the side to integrate electronic components like batteries, sensors (including detectors for temperature and acceleration, as well as tongue-touch sensors), and actuators (like vibration motors and LEDs for feedback). You can also place small electronics outside of the PCB housing on individual teeth.
The active mouth
“The mouth is a really interesting place for an interactive wearable and can open up many opportunities, but has remained largely unexplored due to its complexity,” says Michael Wessely, a former CSAIL postdoc and senior author of the paper on MouthIO, who is now an assistant professor at Aarhus University. “This compact, humid environment has elaborate geometries, making it hard to build a wearable interface to place inside. With MouthIO, though, we’ve developed a new kind of device that’s comfortable, safe, and almost invisible to others. Dentists and other doctors are eager about MouthIO for its potential to provide new health insights, tracking things like teeth grinding and potentially bacteria in your saliva.”
The excitement for MouthIO’s potential in health monitoring stems from initial experiments. The team found that their device could track bruxism (the habit of grinding teeth) by embedding an accelerometer within the brace to track jaw movements. When attached to the lower set of teeth, MouthIO detected when users grind and bite, with the data charted to show how often users did each.
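As a rough illustration of how such accelerometer data could be screened (a hypothetical threshold rule, not the study’s detection pipeline):

```python
# Illustrative sketch only, not the study's pipeline: flag grinding
# episodes as sustained bursts of jaw-accelerometer activity.
import numpy as np

def grinding_episodes(accel_mag: np.ndarray, thresh: float = 1.5,
                      min_len: int = 10) -> int:
    """Count runs of at least min_len samples exceeding thresh."""
    runs, count = 0, 0
    for sample in accel_mag:
        count = count + 1 if sample > thresh else 0
        if count == min_len:   # count each sustained run exactly once
            runs += 1
    return runs

signal = np.abs(np.random.randn(1000)) + 0.5   # stand-in for sensor data
print(grinding_episodes(signal))
```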
Wessely and his colleagues’ customizable brace could one day help users with motor impairments, too. The team connected small touchpads to MouthIO, helping detect when a user’s tongue taps their teeth. These interactions could be sent via Bluetooth to scroll across a webpage, for example, allowing the tongue to act as a “third hand” to open up a new avenue for hands-free interaction.
"MouthIO is a great example how miniature electronics now allow us to integrate sensing into a broad range of everyday interactions,” says study co-author Stefanie Mueller, the TIBCO Career Development Associate Professor in the MIT departments of Electrical Engineering and Computer Science and Mechanical Engineering and leader of the HCI Engineering Group at CSAIL. “I'm especially excited about the potential to help improve accessibility and track potential health issues among users."
Molding and making MouthIO
To get a 3D model of your teeth, you can first create a physical impression and fill it with plaster. You can then scan your mold with a mobile app like Polycam and upload that to Blender. Using the researchers’ plugin within this program, you can clean up your dental scan to outline a precise brace design. Finally, you 3D print your digital creation in clear dental resin, where the electronic components can then be soldered on. Users can create a standard brace that covers their teeth, or opt for an “open-bite” design within their Blender plugin. The latter fits more like open-finger gloves, exposing the tips of your teeth, which helps users avoid lisping and talk naturally.
This “do it yourself” method costs roughly $15 to produce and takes two hours to be 3D-printed. MouthIO can also be fabricated with a more expensive, professional-level teeth scanner similar to what dentists and orthodontists use, which is faster and less labor-intensive.
Compared to its closed counterpart, which fully covers your teeth, the researchers view the open-bite design as a more comfortable option. The team preferred to use it for beverage monitoring experiments, where they fabricated a brace capable of alerting users when a drink was too hot. This iteration of MouthIO had a temperature sensor and a monitor embedded within the PCB housing that vibrated when a drink exceeded 65 degrees Celsius (or 149 degrees Fahrenheit). This could help individuals with mouth numbness better understand what they’re consuming.
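The alert logic itself is a simple threshold check. A minimal sketch follows; the threshold matches the 65-degree figure above, but the sensor and motor interfaces are hypothetical stand-ins, not MouthIO’s firmware:

```python
# Minimal sketch of the beverage-alert logic; the 65 C threshold comes
# from the article, but the I/O here is a hypothetical stand-in.
HOT_THRESHOLD_C = 65.0

def should_vibrate(temp_c: float) -> bool:
    """True when the drink meets or exceeds the alert threshold."""
    return temp_c >= HOT_THRESHOLD_C

for reading in (40.0, 62.5, 70.2):   # simulated temperature readings
    print(f"{reading:.1f} C ->", "vibrate" if should_vibrate(reading) else "ok")
```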
In a user study, participants also preferred the open-bite version of MouthIO. “We found that our device could be suitable for everyday use in the future,” says study lead author and Aarhus University PhD student Yijing Jiang. “Since the tongue can touch the front teeth in our open-bite design, users don’t have a lisp. This made users feel more comfortable wearing the device during extended periods with breaks, similar to how people use retainers.”
The team’s initial findings indicate that MouthIO is a cost-effective, accessible, and customizable interface, and the team is working on a more long-term study to evaluate its viability further. They’re looking to improve its design, including experimenting with more flexible materials, and placing it in other parts of the mouth, like the cheek and the palate. Among these ideas, the researchers have already prototyped two new designs for MouthIO: a single-sided brace for even higher comfort when wearing MouthIO while also being fully invisible to others, and another fully capable of wireless charging and communication.
Jiang, Mueller, and Wessely’s co-authors include PhD student Julia Kleinau, master’s student Till Max Eckroth, and associate professor Eve Hoggan, all of Aarhus University. Their work was supported by a Novo Nordisk Foundation grant and was presented at ACM’s Symposium on User Interface Software and Technology.
“As you step into the world, may you step out to lead with purpose, with integrity, with compassion, not just for your career but for the greater good.”
This was the clarion call sounded by Ms Denise Phua, Member of Parliament for Jalan Besar GRC and Mayor for Central Singapore District, to a 100-strong audience at a recent talk where she shared insights on career transitions and effective leadership.
Titled “Lessons from a Leadership Journey into the Private, Public and People Sector”, the session at the Shaw Foundation Alumni House kicked off the new FASS Distinguished Speaker Series that was launched by the NUS Faculty of Arts and Social Sciences (FASS) in celebration of the Faculty’s 95th anniversary. The series aims to inspire FASS students to excellence by showcasing alumni and their achievements.
The inaugural session, moderated by Professor Lionel Wee, Dean of FASS, was also attended by FASS staff and faculty.
Developing a leadership toolkit and other leadership goals
Ms Phua, who has worked in the corporate, people (or social) and public sectors, noted some differences in work culture and performance measurement across these sectors.
Performance in the private sector is typically measured in financial terms and is quantified with metrics such as market share, revenue growth, and shareholder values. The social sector values qualitative outcomes, with the focus shifting to doing good, creating societal impact, and accountability to beneficiaries, while the public sector operates within the frame of public service, governance, and accountability to the broader society.
Whichever sector one is in, it is essential to build a leadership toolkit comprising the four skills — personal mastery, interpersonal leadership, supervisory leadership and organisational leadership — in order to chart a meaningful and successful career path.
Ms Phua also emphasised the importance of having diverse experiences to remain relevant, and venturing outside one’s comfort zone in a dynamic environment to keep this toolkit updated. Citing Pope Francis, who told Singaporean youths in a recent dialogue that a young person who stays in his own comfort becomes “fat”, she urged young people to continually add skills to their toolkit and to go out there to take risks.
In her experience, having a purpose is also key as it helps to keep one’s eye on the bigger picture and the important goals to work towards. She explained, “Having a clear purpose guides our decisions and keeps us focused on what truly matters. For me personally, my faith helps to anchor me.”
Be fearless in the unknown
Another topic Ms Phua delved into was fear, something she learnt to overcome whenever she was thrust into unfamiliar situations and environments – whether it was having to work overseas with people of different backgrounds and cultures in China and the US, or having to establish new work processes or systems from scratch.
Sharing about her stint as a waitress in Palo Alto, driving alone in her twenties in the wilderness between small American towns for work, and working with people of vastly different cultures, accents and habits from her own, Ms Phua revealed that these intimidating early experiences eventually helped her overcome her fear of trying new things.
Raising a son with autism further bolstered her courage to pioneer new initiatives and projects, particularly in advocacy and support for those with special needs. Driven by a desire to create a more inclusive society where everyone can thrive, Ms Phua has made trailblazing contributions to the special needs advocacy space. These include co-founding Pathlight School, Singapore’s first school offering mainstream education for autistic children; The Purple Parade, a national platform that celebrates the abilities of persons with special needs; and The Purple Symphony, Singapore’s largest inclusive orchestra, set up by her and her team at the Central Singapore District Community Development Council (CDC).
She shared, “[Having the technical skills and soft skills] were really important for me in the private, people and public sector, but what was more significant was scaling the challenges I faced and overcoming the fears I had.”
As a leader, Ms Phua stressed that it is also essential to find a “tribe” and surround oneself with like-minded people who share the same values, passions or vision, and who are able to challenge one another, as the success of any project is never dependent on a single individual. This is how she managed to ideate and execute the many initiatives across the three sectors she has been involved in.
Beyond building careers to serving society
Ms Phua’s candid and personal sharing was followed by questions from students about how to choose career pathways, make meaningful change in the community, and tackle evolving social challenges.
Responding to a question on how she found her calling in the special needs advocacy space, Ms Phua shared that she did not identify it through any conventional career or personality quizzes. For her, she felt called to the special needs space in a rather dramatic fashion, after “God gave [her] a child who didn’t speak nor socialise at three, like other typical children.”
Referencing some of the other community initiatives and assistance schemes launched during her time as Mayor of the Central Singapore District, such as the ‘Weekly Nurture’ problem-solving and communications classes and the ‘Ready for School’ financial assistance scheme for children from low-income families, another student asked what Ms Phua considered key ingredients for success in these community projects.
Her response was simple – find the gaps, think of ways to make a difference, and then start tapping on one’s network of resources to implement solutions.
“When looking at physical wellness for seniors for example, I will ask what physical wellness means to them. Together with my team, we then brainstorm, come up with programmes, such as our CDC’s ‘Silver Homes’ and then look for resources to fund them,” she added.
When asked more broadly how Singapore can address societal issues such as an ageing population and growing income disparity, Ms Phua observed that Singapore has become more complex in many ways since 2006 when she first entered politics, making tackling such issues more challenging. However, she encouraged students to look beyond their “pet passions” and serve in areas in society that have the most need, and to stay mission-focused to continue effecting real change.
She cited the ‘power of one’. “If each of us is determined to bless other people with what we have…whether it’s our talent, our gift, our networks, then organisations, nations and societies will all be for the better,” said Ms Phua.
The National University of Singapore (NUS) will be taking a significant leap forward in synthetic biology, leveraging its deep expertise and cutting-edge innovations to usher in a new paradigm for green manufacturing driven by this rapidly evolving field. Over the next six years, the University plans to dedicate substantial resources and efforts – estimated at around S$120 million – to firmly establish synthetic biology as a foundational pillar of NUS’ innovation ecosystem, driving transformative benefits for Singapore across multiple sectors.
Traditionally, manufacturing is heavily dependent on petrochemicals, a major contributor to the climate crisis. However, synthetic biology (SynBio), which enables the design and engineering of biological ‘factories’ to create more efficient, sustainable processes and products, is emerging as a game-changer in driving the chemical industry towards a greener future. This groundbreaking approach has the potential to significantly reduce environmental impact, paving the way for greater sustainability across diverse sectors, including food, textiles, flavours, and fuels. This shift promises widespread positive effects, reshaping industries and advancing the global effort to combat climate change.
“NUS is charging ahead, pioneering efforts to strengthen and expand the University’s synthetic biology ecosystem, positioning Singapore at the forefront of tackling global challenges such as food security, energy resilience, and sustainable development. We are deeply committed to surpassing previous accomplishments and achieving new heights of excellence in synthetic biology,” said Professor Liu Bin, NUS Deputy President (Research and Technology).
She added, “A vibrant synthetic biology ecosystem in NUS and Singapore will foster new industry partnerships, cultivate a highly skilled workforce and inspire a wave of innovative startups. Together, these developments are poised to boost Singapore’s economic growth.”
Associate Professor Matthew Chang, Director of SynCTI, said, “Now is the time for synthetic biology to achieve a lasting, transformative impact. Over the past decade, NUS has developed robust capabilities and strategic networks, positioning both NUS and Singapore to seize emerging opportunities in this rapidly advancing field. We are eager to collaborate closely with our academic, research, and industry partners to foster ongoing growth, innovation, and the application of synthetic biology, both locally in Singapore and globally.”
Embarking on new SynBio initiatives
NUS has launched several bold endeavours to realise the vision of a SynBio-driven future:
1. NUS is spearheading the establishment of Singapore’s new national SynBio initiative to advance Singapore’s biomanufacturing sector. This initiative will foster a whole-of-nation effort to galvanise the potential of SynBio in advancing green manufacturing practices. Please refer to Annexe 1 for more information on this initiative.
2. To further augment its research efforts, NUS will collaborate extensively with global leaders in SynBio, with the aim of creating a powerful multiplier effect. Some exciting research collaborations include:
a) Partnering with the University of Illinois Urbana-Champaign (UIUC) to develop reliable and cost-effective methods for producing safe, nutritious, and delicious foods through SynBio-based precision fermentation;
b) Working with the Shanghai Jiao Tong University (SJTU) to develop efficient cyanobacterial (blue-green algae) cell factories and other microorganisms to convert carbon dioxide (CO2) directly into biomaterials and biofuels; and
c) Teaming up with the French National Centre for Scientific Research (CNRS) to demonstrate the feasibility of converting green hydrogen and concentrated CO2 into sustainable biofuels.
These joint projects, supported by the Campus for Research Excellence and Technological Enterprise (CREATE) under the National Research Foundation Singapore (NRF), aim to build strategic capabilities in SynBio. Please refer to Annexe 2 for more information on these collaborative programmes.
A decade in the making for the SynBio revolution in Singapore
The creation of NUS’ dynamic SynBio ecosystem started a decade ago with foresight and vision.
SynCTI, which was established in 2014, has played a key role in creating new knowledge, developing foundational technologies in synthetic biology, and grooming the next generation of highly skilled researchers equipped with fundamental science and translational research capabilities. SynCTI will commemorate its 10th anniversary with a celebratory event in November 2024.
The formation of SynCTI catalysed the setting up of the Singapore Consortium for Synthetic Biology (SINERGY) to consolidate Singapore’s capabilities in synthetic biology and harness synergies across industry sectors to create a vibrant and globally connected bio-based economy in Singapore. SINERGY is supported by the National Research Foundation and is currently hosted under the Consortium Management Office at A*STAR. Today, SINERGY has nine academic partners and 27 industry partners, working hand in hand to unlock Singapore’s bio-potential.
Another key component of NUS’ SynBio ecosystem is the WIL@NUS Corporate Laboratory, a research partnership between NUS and Wilmar International Limited to demonstrate the translation of academic SynBio research through collaboration with the industry.
Set up in June 2018 and hosted at the NUS Yong Loo Lin School of Medicine, the WIL@NUS Corporate Laboratory leverages the expertise of Wilmar and NUS to develop sustainable, efficient, and cost-effective bio-based methods for the production of industrial chemicals. This successful academic-industry partnership has led to the development of enzymes and microbes for the biomanufacturing of oleochemicals.
With a strong foundation in place, NUS is strategically positioned to lead in the field of synthetic biology.
This past weekend, 67 teams from 24 countries competed in ETH Zurich’s Cybathlon – fighting not only for victory, but also for the advancement of assistive technologies that are more suitable for everyday use. The third edition of the competition for people with disabilities and experimental assistive technologies was a complete success.
World appears on track for even more dangerous Cold War 2.0
Pulitzer winner warns China, which is building nuclear arsenal, would be third major player besides U.S., Russia — and six other nations now have bombs, too
Liz Mineo, Harvard Staff Writer
China has significantly picked up the pace of expanding its nuclear arsenal in recent years, a development that increases the likelihood a new arms race will begin revving up, one that could be more dangerous than the Cold War contest between the Soviet Union and the U.S., according to two-time Pulitzer Prize-winning journalist David Hoffman.
In a talk on Tuesday, sponsored by the Davis Center for Russian and Eurasian Studies, Hoffman raised the alarm about a “new nuclear age,” with three countries leading the pack and a handful of other countries already developing their own nuclear weapons.
“It’s time for us to wake up,” said Hoffman. “It’s time for us to get over the vacation that we’ve had since the end of the Cold War.”
The period of geopolitical tension between the U.S. and the Soviet Union, and their respective allies, began in 1946 and ended in 1991 with the fall of the Soviet Union. That year, President George H.W. Bush and Soviet leader Mikhail Gorbachev signed the Strategic Arms Reduction Treaty (START), which called for both nations to reduce their nuclear arsenals and which President Ronald Reagan had first proposed in 1982. START’s successor, the New START treaty, will expire in February 2026.
A new arms race poses all the risks of nuclear Armageddon as before, and then some, said Hoffman. It will multiply the possibilities for mistakes and misperceptions, and it will take effort to discern the intentions of the other two competitors, he said.
“A three-way race will be infinitely more difficult to negotiate than was the Cold War between two sides,” said Hoffman. “A three-way race with all the different kinds of forces, threats, and possibilities is a bit like a diplomatic Rubik’s cube. It’s not going to be easy to use diplomacy to solve it.”
Hoffman, who won his first Pulitzer for his 2009 book, “The Dead Hand: The Untold Story of the Cold War Arms Race and Its Dangerous Legacy,” said he worries about the lack of political will among world leaders to rid the world of nuclear weapons.
“Nuclear weapons are political weapons,” Hoffman said. “They’re instruments of threat and of coercion, and they require political will to restrain. Reagan and Gorbachev for their own reasons summoned up the political will to get rid of these weapons … Now we’re at a time of growing danger, a time when political will is absent, and that’s why I’m alarmed.”
A contributing editor to The Washington Post, Hoffman won his most recent Pulitzer this year for a series of editorials on technologies and tactics used by authoritarian regimes to stifle dissent. He noted that China has 500 nuclear warheads and may reach 1,000 by 2030, according to the Pentagon.
“China seems to be planning to try to match the United States and Russian nuclear arsenals to get to about 1,500 warheads,” said Hoffman. “China is accelerating the arms race, and I think it’s very worrisome that China refuses to enter into negotiations. The Chinese basically say, ‘Wait till we get to be peers with Russia and the United States, and then we’ll talk about that.’”
The fear of a nuclear war is not unfounded, said Hoffman, not only because there are more nuclear-armed countries — a group that includes France, the U.K., Pakistan, India, North Korea, and Israel — than ever before, but also because of the potential impact of artificial intelligence on early warning systems.
The possibility of a false alarm that could lead to a nuclear attack is real, said Hoffman, who wrote about a series of failures in the U.S. early warning system for ballistic missile defense in his book about the Cold War.
Between 1960 and 1976, there were seven false alarms, and between 1979 and 1980, there were five. In 1983, the Soviet early warning system also registered a false alarm. And more recently, in 2022, India accidentally fired a missile into Pakistan. India later said it was due to a “technical malfunction” during routine maintenance.
“We cannot grow complacent about the possibility of mistake, a misperception or a misunderstanding leading to a nuclear explosion,” said Hoffman. “If artificial intelligence says, ‘We’re under attack,’ would you believe it? Would you not believe it? Would you put the fate of the Earth in the hands of ChatGPT?”
This week’s event was part of the series “Russia: In Search of a New Paradigm — Conversations with Yevgenia Albats,” a prominent Russian investigative journalist and political scientist who received her Ph.D. in political science from Harvard in 2004 and is currently a visiting scholar at the Davis Center.
Although the scenario seems bleak, Hoffman said the U.S. could take actions to prevent a nuclear apocalypse, including strengthening ties with U.S. allies, and working on nuclear weapons risk-reduction measures and robust arms control treaties with both China and Russia.
“American people just have to hear that this looming arms race is coming in,” said Hoffman. “Ultimately, though, it comes back to political will … Because that’s ultimately the way to restrain political weapons … I’d love to see us get back to that Reagan-Gorbachev magic moment.”
Physical chemist Giacinto Scoles, Princeton’s Donner Professor of Science, Emeritus, died in Sassenheim, the Netherlands, on Sept. 25 with his wife of nearly 60 years at his side. He was 89.
Outside of the U.S., how do leaders view Harris and Trump?
Weatherhead panelists offer insights on geopolitical stakes of presidential election
Christy DeSmith
Harvard Staff Writer
Next month’s U.S. presidential election is being closely watched across the world. But do foreign governments have a clear preference for either Vice President Kamala Harris or former President Donald J. Trump?
At the Oct. 9 Weatherhead Center for International Affairs forum on the election’s geopolitical stakes, panelists suggested that many international observers remain ambivalent about the race. The panel featured four experts weighing in on the global hotspots of China, Russia, the Middle East, and Latin America.
Russian President Vladimir Putin is probably inclined toward Trump, according to Timothy J. Colton, the Morris and Anna Feldberg Professor of Government and Russian Studies and chair of the Harvard Academy for International and Area Studies. “But the Russians will point out that Trump, in the end, did Russia no favors in his first term,” Colton said. “At least at the leadership level, they’re by and large convinced nothing good is going to come in the election from Russia’s point of view.”
At the moment, Moscow appears less invested in the U.S. election than in the last two cycles, Colton observed. “The Russians have turned away almost every shred of cooperation with the United States of America, and at least verbally and rhetorically they are fashioning a pretty radical new image of where they belong in the world.” As examples, he cited top thinkers in Russia who speak of becoming an Asian country and Putin himself describing Russia as a civilization state, or a unique culture descended from an empire much like China.
What’s more, Colton added, the Russian establishment sees far more opportunity in countries like Germany, Hungary, and the Czech Republic for undermining support for Ukraine. “It is reasonable to expect the war in Ukraine will come to an end, or at least to a point of suspension, in the next U.S. presidential term,” he added. “But my own sense is the geopolitical outcome will depend more on what happens on the battlefield than on anything that happens in Washington.”
Most pressing in the Middle East is the escalating conflict involving Israel, Hezbollah, Hamas, and Iran. Ziad Daoud, a senior fellow with the Middle East Initiative at the Kennedy School’s Belfer Center for Science and International Affairs, described a prevailing sense of fury amid another crisis in the Arab world. “This dissatisfaction could multiply if we get a condition in which you have loss of territory in Gaza or in the West Bank or in Lebanon, or you have displacement of people from Gaza or the West Bank or Lebanon,” he said.
For Daoud, the central question is: Which U.S. presidential administration could help secure a favorable resolution for all parties? As a model, he pointed to the “five no’s” offered last year by National Security Council member Brett McGurk: “No forced displacement, no reoccupation, no reduction in territory, no threats to Israel, no besiegement.”
“There are people who say President Trump is averse to war and therefore he’s more likely to end the war,” offered Daoud, who is also the chief emerging markets economist at Bloomberg. “But there are others who say that President Trump may not be as committed to the ‘five no’s’ outlined by the U.S. administration last year. Yes, the ceasefire might be reached but the conditions may not be great, and that might lead to further disruption down the road.”
For countries south of the U.S., the issues of drug trafficking and illegal corridors are far more central than immigration, according to the Colombian journalist Diana Durán Nuñez, currently a fellow with Harvard’s Nieman Foundation for Journalism. “If the U.S. really wants to see a different outcome on immigration, they need to raise concern and interest among their partners in Latin America,” she said.
The former TV news reporter also noted the growing influence of China, highlighting Nicaragua and Venezuela as two of the Asian country’s key allies. But the region’s ever-shifting political allegiances could hold new possibility for U.S. interests. “Latin America is a complex compound of left-wing, right-wing, and authoritarian governments,” Durán Nuñez said, pointing to the U.S. military’s new naval base partnership with Argentina’s right-wing president, Javier Milei. “China was supposed to be the ally for that project — there had already been conversations, and they were quite advanced — but Milei stopped it.”
As for China itself, an obvious concern is U.S. trade policy, given Trump’s proposal of a 60 percent tariff on imports from the country. Rana Mitter, the Kennedy School’s S.T. Lee Chair of U.S.-Asia Relations, emphasized that neither U.S. presidential candidate stands for 1990s-style free trade with China. For example, a Harris administration is expected to pursue more restrictions than the Biden administration on tech investments and intellectual property transfers involving China. “But that lies in contrast with what we are promised by people involved with a Trump II administration,” he said.
Mitter, who recently visited the Chinese capital, noted how closely elites there are following the U.S. presidential race. “The only place where I’ve seen as much interest as there is on, say, CNN, is in some of the think tanks of Beijing,” he said. “Many people, very well informed about U.S. politics, wanted to talk about the subject and gave highly granular accounts of voting patterns in various Pennsylvania and Wisconsin counties.”
But these sophisticated observers favor neither candidate. “The majority view,” Mitter said, “was that it might not make that much difference on the grounds that relations between the U.S. and China will be turbulent for quite some time.”
Senior Research Associate Simon Birnbach has been awarded a UK Intelligence Community Postdoctoral Fellowship from the Royal Academy of Engineering, which will support his project on securely fusing cooperative and non-cooperative data for maritime domain awareness.
Kislak Center curator Alicia Meyer is researching a pair of gloves in the Penn Libraries collection rumored to have been William Shakespeare’s, enlisting the help of Tessa Gadomski in the Libraries conservation laboratory to see if the gloves could be from the 1600s.
Researchers from the Critical Analytics for Manufacturing Personalized-Medicine (CAMP) interdisciplinary research group at the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, alongside collaborators from the National University of Singapore Tissue Engineering Programme, have developed a novel method to enhance the ability of mesenchymal stromal cells (MSCs) to generate cartilage tissue by adding ascorbic acid during MSC expansion. The research also discovered that micro-magnetic resonance relaxometry (µMRR), a novel process analytical tool developed by SMART CAMP, can be used as a rapid, label-free process-monitoring tool for the quality expansion of MSCs.
Articular cartilage, a connective tissue that protects the bone ends in joints, can degenerate due to injury, age, or arthritis, leading to significant joint pain and disability. Especially in countries — such as Singapore — that have an active, aging population, articular cartilage degeneration is a growing ailment that affects an increasing number of people. Autologous chondrocyte implantation is currently the only Food and Drug Administration-approved cell-based therapy for articular cartilage injuries, but it is costly, time-intensive, and requires multiple treatments. MSCs are an attractive and promising alternative as they have shown good safety profiles for transplantation. However, clinical use of MSCs is limited due to inconsistent treatment outcomes arising from factors such as donor-to-donor variability, variation among cells during cell expansion, and non-standardized MSC manufacturing protocols.
The heterogeneity of MSCs can lead to variations in their biological behavior and treatment outcomes. While large-scale MSC expansions are required to obtain a therapeutically relevant number of cells for implantation, this process can introduce cell heterogeneity. Therefore, improved processes are essential to reduce cell heterogeneity while increasing donor cell numbers with improved chondrogenic potential — the ability of MSCs to differentiate into cartilage cells to repair cartilage tissue — to pave the way for more effective and consistent MSC-based therapies.
In a paper titled “Metabolic modulation to improve MSC expansion and therapeutic potential for articular cartilage repair,” published in the scientific journal Stem Cell Research and Therapy, CAMP researchers detailed their development of a priming strategy to enhance the expansion of quality MSCs by modifying the way cells utilize energy. The research findings have shown a positive correlation between chondrogenic potential and oxidative phosphorylation (OXPHOS), a process that harnesses the reduction of oxygen to create adenosine triphosphate — a source of energy that drives and supports many processes in living cells. This suggests that manipulating MSC metabolism is a promising strategy for enhancing chondrogenic potential.
Using novel process analytical tools (PATs) developed by CAMP, the researchers explored the potential of metabolic modulation in both short- and long-term harvesting and reseeding of cells. To enhance their chondrogenic potential, they varied the nutrient composition, including glucose, pyruvate, glutamine, and ascorbic acid (AA). As AA is reported to support OXPHOS and to have a positive impact on chondrogenic potential during differentiation — a process in which immature cells become mature cells with specific functions — the researchers further investigated its effects during MSC expansion.
The addition of AA to cell cultures for one passage during MSC expansion and prior to initiation of differentiation was found to improve chondrogenic differentiation, which is a critical quality attribute (CQA) for better articular cartilage repair. Longer-term AA treatment led to a more than 300-fold increase in the yield of MSCs with enhanced chondrogenic potential, and reduced cell heterogeneity and cell senescence — a process by which a cell ages and permanently stops dividing but does not die — when compared to untreated cells. AA-treated MSCs with improved chondrogenic potential showed a robust shift in metabolic profile to OXPHOS. This metabolic change correlated with μMRR measurements, which helps identify novel CQAs that could be implemented in MSC manufacturing for articular cartilage repair.
The research also demonstrates the potential of the process analytical tool developed by CAMP, micromagnetic resonance relaxometry (μMRR) — a miniature benchtop device that employs magnetic resonance imaging (MRI) on a microscopic scale — as a process-monitoring tool for the expansion of MSCs with AA supplementation. Originally used as a label-free malaria diagnosis method due to the presence of paramagnetic hemozoin particles, μMRR was used in the research to detect senescence in MSCs. This rapid, label-free method requires only a small number of cells for evaluation, which allows for MSC therapy manufacturing in closed systems — systems that protect pharmaceutical products by reducing contamination risks from the external environment — while enabling intermittent monitoring of a limited lot size per production.
“Donor-to-donor variation, intrapopulation heterogeneity, and cellular senescence have impeded the success of MSCs as a standard of care therapy for articular cartilage repair. Our research showed that AA supplementation during MSC expansion can overcome these bottlenecks and enhance MSC chondrogenic potential,” says Ching Ann Tee, senior postdoc at SMART CAMP and first author of the paper. “By controlling metabolic conditions such as AA supplementation, coupled with CAMP’s process analytical tools such as µMRR, the yield and quality of cell therapy products could be significantly increased. This breakthrough could help make MSC therapy a more effective and viable treatment option and provide standards for improving the manufacturing pipeline.”
“This approach of utilizing metabolic modulation to improve MSC chondrogenic potential could be adapted into similar concepts for other therapeutic indications, such as osteogenic potential for bone repair or other types of stem cells. Implementing our findings in MSC manufacturing settings could be a significant step forward for patients with osteoarthritis and other joint diseases, as we can efficiently produce large quantities of high-quality MSCs with consistent functionality and enable the treatment of more patients,” adds Professor Laurie A. Boyer, principal investigator at SMART CAMP, professor of biology and biological engineering at MIT, and corresponding author of the paper.
The research is conducted by SMART and supported by the National Research Foundation Singapore under its Campus for Research Excellence and Technological Enterprise program.
Weill Cornell Medicine researchers have discovered a mechanism that ovarian tumors use to cripple immune cells – blocking the energy supply T cells depend on. The work points toward a promising new immunotherapy approach for ovarian cancer.
Hospice care aims to provide a health care alternative for people nearing the end of life by sparing them unwanted medical procedures and focusing on the patient’s comfort. A new study co-authored by MIT scholars shows hospice also has a clear fiscal benefit: It generates substantial savings for the U.S. Medicare system.
The study examines the growth of for-profit hospice providers, who receive reimbursements from Medicare, and evaluates the cost of caring for patients with Alzheimer’s disease and related dementias (ADRD). The research finds that for patients using for-profit hospice providers, there is about a $29,000 savings to Medicare over the first five years after someone is diagnosed with ADRD.
“Hospice is saving Medicare a lot of money,” says Jonathan Gruber, an MIT health care economist and co-author of a paper detailing the study’s findings. “Those are big numbers.”
In recent decades, hospice care has grown substantially. That growth has been accompanied by concerns that for-profit hospice organizations, in particular, might be overly aggressive in pursuing patients. There have also been instances of fraud by organizations in the field. And yet, the study shows that the overall dynamics of hospice are the intended ones: People are indeed receiving palliative-type care, based around comfort rather than elaborate medical procedures, at less cost.
“What we found is that hospice basically operates as advertised,” adds Gruber, the Ford Professor of Economics at MIT. “It does not extend lives on aggregate, and it does save money.”
The paper, “Dying or Lying? For-Profit Hospices and End of Life Care,” appears in the American Economic Review. The co-authors are Gruber, who is also head of MIT’s Department of Economics; David Howard, a professor at the Rollins School of Public Health at Emory University; Jetson Leder-Luis PhD ’20, an assistant professor at Boston University; and Theodore Caputi, a doctoral student in MIT’s Department of Economics.
Charting what more hospice access means
Hospice care in the U.S. dates to at least the 1970s. Patients opt out of their existing medical network and receive nursing care where they live, either at home or in care facilities. That care is oriented around reducing suffering and pain, rather than attempting to eliminate underlying causes. Generally, hospice patients are expected to have six months or less to live. Most Medicare funding goes to private contractors supplying medical care, and in the 1980s the federal government started using Medicare to reimburse the medical expenses from hospice as well.
While the number of nonprofit hospice providers in the U.S. has remained fairly consistent, the number of for-profit hospice organizations grew fivefold between 2000 and 2019. Medicare payments for hospice care are now about $20 billion annually, up from $2.5 billion in 1999. People diagnosed with ADRD now make up 38 percent of hospice patients.
Still, Gruber considers the topic of hospice care relatively under-covered by analysts. To conduct the study, the team examined over 10 million patients from 1999 through 2019. The researchers used the growth of for-profit hospice providers to compare the effects of being enrolled in nonprofit hospice care, for-profit hospice care, or staying in the larger medical system.
That means the scholars were not only evaluating hospice patients; by evaluating the larger population in a given area where and when for-profit hospice firms opened their doors, they could see what difference greater access to hospice care made. For instance, having a new for-profit hospice open locally is associated with a roughly 2 percentage point increase in for-profit hospice admissions in following years.
“We’re able to use this methodology to [analyze] if these patients would otherwise have not gone to hospice or would have gone to a nonprofit hospice,” Gruber says.
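Gruber’s description maps onto a standard difference-in-differences design: compare areas where a for-profit hospice opened with areas where none did, before and after entry. The sketch below illustrates only the bookkeeping of that comparison; all numbers are invented, and the paper’s actual specification is far richer.

```python
# Toy difference-in-differences illustration of the identification idea:
# areas that gained a for-profit hospice vs. areas that did not, before
# vs. after entry. All figures are invented for illustration.

# Mean five-year Medicare spending per ADRD patient (hypothetical dollars)
spend = {
    ("entry_area", "before"): 180_000,
    ("entry_area", "after"): 155_000,     # falls after a hospice opens
    ("no_entry_area", "before"): 178_000,
    ("no_entry_area", "after"): 174_000,  # mild decline everywhere
}

change_entry = spend[("entry_area", "after")] - spend[("entry_area", "before")]
change_control = spend[("no_entry_area", "after")] - spend[("no_entry_area", "before")]

# Netting out the common time trend attributes the residual change
# to the greater access to hospice care.
did_estimate = change_entry - change_control
print(f"estimated effect of for-profit entry: {did_estimate:+,} per patient")
# -> estimated effect of for-profit entry: -21,000 per patient
```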
The method also allows the scholars to estimate the substantial cost savings. And it shows that enrolling in hospice increased the five-year post-diagnosis mortality rate of ADRD patients by 8.6 percentage points, from a baseline of 66.6 percent. Entering into hospice care — which is a reversible decision — means forgoing life-extending surgeries, for instance, if people believe such procedures are no longer desirable for them.
Rethinking the cap
Because hospice provides care without more expensive medical procedures, it is understandable that it reduces overall medical costs. Still, given that Medicare reimburses hospice organizations, one ongoing policy concern is that hospice providers might aggressively recruit a larger percentage of patients who end up living longer than six additional months. In this way hospice providers might unduly boost their revenues and put more pressure on the Medicare budget.
To counteract this, Medicare caps each hospice’s average per-patient reimbursement (roughly $29,205 as of 2019). Most patients die relatively soon after entering hospice care; some will significantly outlive the six-month expectation. But a hospice’s total payments cannot exceed that average across all of its patients.
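Because the cap binds on a provider’s average reimbursement rather than on any individual patient, the mix of short and long stays determines whether money is owed back. A minimal sketch of that mechanism, assuming the cap works as the simple average described above and using invented payment totals:

```python
# Simplified hospice aggregate cap: Medicare compares a provider's total
# payments for the year to cap * patients served; the overage is repaid.
# Payment figures below are invented for illustration.

CAP_PER_PATIENT = 29_205  # approximate 2019 cap, per the study

def cap_overage(total_reimbursed: float, patients_served: int) -> float:
    """Return the amount the hospice must repay Medicare, if any."""
    allowance = CAP_PER_PATIENT * patients_served
    return max(0.0, total_reimbursed - allowance)

# Serving 100 patients allows up to $2,920,500 in total payments; long-stay
# patients are fine as long as short stays pull the average back down.
print(cap_overage(2_800_000, 100))  # 0.0 -> under the cap
print(cap_overage(3_100_000, 100))  # 179500.0 -> owed back to Medicare
```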
However, the study also suggests the cap is a suboptimal approach. In 2018, 15.5 percent of hospice patients were being discharged from hospice care while still alive, due to the cap limiting hospice capacity. As the paper notes, “patients in hospices facing cap pressure are more likely to be discharged from hospice alive and experience higher mortality rates.”
As Gruber notes, the spending cap is partly a fraud-fighting tool. And yet the cap clearly has other, unintended consequences for patients and their medical choices, crowding some out of the hospice system.
“The cap may be throwing the baby out with the bathwater,” Gruber says. “The government has more focused tools to fight fraud. Using the cap for that is a blunt instrument.”
As long as people are informed about hospice and the medical trajectory it puts them on, then, hospice care appears to be providing a valued service at less expense than other approaches to end-of-life care.
“The holy grail in health care is things that improve quality and save money,” Gruber says. “And with hospice, there are surveys saying people like it. And it certainly saves money, and there’s no evidence it’s doing harm [to patients]. We talk about how we struggle to deal with health care costs in this country, so this seems like what we want.”
The research was supported in part by the National Institute on Aging of the National Institutes of Health.
A team led by researchers at MIT has discovered that a distant interstellar cloud contains an abundance of pyrene, a type of large, carbon-containing molecule known as a polycyclic aromatic hydrocarbon (PAH).
The discovery of pyrene in this far-off cloud, which is similar to the collection of dust and gas that eventually became our own solar system, suggests that pyrene may have been the source of much of the carbon in our solar system. That hypothesis is also supported by a recent finding that samples returned from the near-Earth asteroid Ryugu contain large quantities of pyrene.
“One of the big questions in star and planet formation is: How much of the chemical inventory from that early molecular cloud is inherited and forms the base components of the solar system? What we’re looking at is the start and the end, and they’re showing the same thing. That’s pretty strong evidence that this material from the early molecular cloud finds its way into the ice, dust, and rocky bodies that make up our solar system,” says Brett McGuire, an assistant professor of chemistry at MIT.
Due to its symmetry, pyrene itself is invisible to the radio astronomy techniques that have been used to detect about 95 percent of molecules in space. Instead, the researchers detected an isomer of cyanopyrene, a version of pyrene that has reacted with cyanide to break its symmetry. The molecule was detected in a distant cloud known as TMC-1, using the 100-meter Green Bank Telescope (GBT), a radio telescope at the Green Bank Observatory in West Virginia.
McGuire and Ilsa Cooke, an assistant professor of chemistry at the University of British Columbia, are the senior authors of a paper describing the findings, which appears today in Science. Gabi Wenzel, an MIT postdoc in McGuire’s group, is the lead author of the study.
Carbon in space
PAHs, which contain rings of carbon atoms fused together, are believed to store 10 to 25 percent of the carbon that exists in space. More than 40 years ago, scientists using infrared telescopes began detecting features that are thought to belong to vibrational modes of PAHs in space, but this technique couldn’t reveal exactly which types of PAHs were out there.
“Since the PAH hypothesis was developed in the 1980s, many people have accepted that PAHs are in space, and they have been found in meteorites, comets, and asteroid samples, but we can’t really use infrared spectroscopy to unambiguously identify individual PAHs in space,” Wenzel says.
In 2018, a team led by McGuire reported the discovery of benzonitrile — a six-carbon ring attached to a nitrile (carbon-nitrogen) group — in TMC-1. To make this discovery, they used the GBT, which can detect molecules in space by their rotational spectra — distinctive patterns of light that molecules give off as they tumble through space. In 2021, his team detected the first individual PAHs in space: two isomers of cyanonaphthalene, which consists of two rings fused together, with a nitrile group attached to one ring.
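Radio identification rests on matching those tumbling frequencies exactly. For the simplest case of a linear molecule treated as a rigid rotor (cyanopyrene itself is an asymmetric top with a denser, more complicated spectrum), the rotational lines fall at evenly spaced frequencies

$$\nu_{J \rightarrow J+1} = 2B\,(J+1), \qquad B = \frac{h}{8\pi^{2} I},$$

where $I$ is the molecule’s moment of inertia and $J$ is the rotational quantum number. The larger the molecule, the larger $I$, the smaller the rotational constant $B$, and the more closely spaced the lines, which is one reason big PAHs are hard to pick out of crowded spectra.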
On Earth, PAHs commonly occur as byproducts of burning fossil fuels, and they’re also found in char marks on grilled food. Their discovery in TMC-1, which is only about 10 kelvins, suggested that it may also be possible for them to form at very low temperatures.
The fact that PAHs have also been found in meteorites, asteroids, and comets has led many scientists to hypothesize that PAHs are the source of much of the carbon that formed our own solar system. In 2023, researchers in Japan found large quantities of pyrene in samples returned from the asteroid Ryugu during the Hayabusa2 mission, along with smaller PAHs including naphthalene.
That discovery motivated McGuire and his colleagues to look for pyrene in TMC-1. Pyrene, which contains four rings, is larger than any of the other PAHs that have been detected in space. In fact, it’s the third-largest molecule identified in space, and the largest ever detected using radio astronomy.
Before looking for these molecules in space, the researchers first had to synthesize cyanopyrene in the laboratory. The cyano or nitrile group is necessary for the molecule to emit a signal that a radio telescope can detect. The synthesis was performed by MIT postdoc Shuo Zhang in the group of Alison Wendlandt, an MIT associate professor of chemistry.
Then, the researchers analyzed the signals that the molecules emit in the laboratory, which are exactly the same as the signals that they emit in space.
Using the GBT, the researchers found these signatures throughout TMC-1. They also found that cyanopyrene accounts for about 0.1 percent of all the carbon found in the cloud, which sounds small but is significant when one considers the thousands of different types of carbon-containing molecules that exist in space, McGuire says.
“While 0.1 percent doesn’t sound like a large number, most carbon is trapped in carbon monoxide (CO), the second-most abundant molecule in the universe besides molecular hydrogen. If we set CO aside, one in every few hundred or so remaining carbon atoms is in pyrene. Imagine the thousands of different molecules that are out there, nearly all of them with many different carbon atoms in them, and one in a few hundred is in pyrene,” he says. “That is an absolutely massive abundance. An almost unbelievable sink of carbon. It’s an interstellar island of stability.”
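McGuire’s “one in every few hundred” framing follows from simple bookkeeping, sketched below. The roughly 0.1 percent figure is from the study; the fraction of carbon locked in CO is an assumed, illustrative value.

```python
# Back-of-envelope check of the "one in a few hundred" claim. The ~0.1 percent
# pyrene share of all carbon is from the GBT measurement; the CO share is an
# assumed, illustrative value (most interstellar carbon is locked in CO).

pyrene_share_of_all_carbon = 0.001  # about 0.1 percent, per the study
co_share_of_all_carbon = 0.75      # assumption for illustration only

remaining = 1.0 - co_share_of_all_carbon
pyrene_share_of_remaining = pyrene_share_of_all_carbon / remaining

print(f"about 1 in {1 / pyrene_share_of_remaining:.0f} non-CO carbon atoms is in pyrene")
# -> about 1 in 250 non-CO carbon atoms is in pyrene
```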
Ewine van Dishoeck, a professor of molecular astrophysics at Leiden Observatory in the Netherlands, called the discovery “unexpected and exciting.”
“It builds on their earlier discoveries of smaller aromatic molecules, but to make the jump now to the pyrene family is huge. Not only does it demonstrate that a significant fraction of carbon is locked up in these molecules, but it also points to different formation routes of aromatics than have been considered so far,” says van Dishoeck, who was not involved in the research.
An abundance of pyrene
Interstellar clouds like TMC-1 may eventually give rise to stars, as clumps of dust and gas coalesce into larger bodies and begin to heat up. Planets, asteroids, and comets arise from some of the gas and dust that surround young stars. Scientists can’t look back in time at the interstellar cloud that gave rise to our own solar system, but the discovery of pyrene in TMC-1, along with the presence of large amounts of pyrene in the asteroid Ryugu, suggests that pyrene may have been the source of much of the carbon in our own solar system.
“We now have, I would venture to say, the strongest evidence ever of this direct molecular inheritance from the cold cloud all the way through to the actual rocks in the solar system,” McGuire says.
The researchers now plan to look for even larger PAH molecules in TMC-1. They also hope to investigate the question of whether the pyrene found in TMC-1 was formed within the cold cloud or whether it arrived from elsewhere in the universe, possibly from the high-energy combustion processes that surround dying stars.
The research was funded in part by a Beckman Foundation Young Investigator Award, Schmidt Futures, the U.S. National Science Foundation, the Natural Sciences and Engineering Research Council of Canada, the Goddard Center for Astrobiology, and the NASA Planetary Science Division Internal Scientist Funding Program.
For many decades, fusion has been touted as the ultimate source of abundant, clean electricity. Now, as the world faces the need to reduce carbon emissions to prevent catastrophic climate change, making commercial fusion power a reality takes on new importance. In a power system dominated by low-carbon variable renewable energy sources (VREs) such as solar and wind, “firm” electricity sources are needed to kick in whenever demand exceeds supply — for example, when the sun isn’t shining or the wind isn’t blowing and energy storage systems aren’t up to the task. What is the potential role and value of fusion power plants (FPPs) in such a future electric power system — a system that is not only free of carbon emissions but also capable of meeting the dramatically increased global electricity demand expected in the coming decades?
For a year and a half, investigators in the MIT Energy Initiative (MITEI) and the MIT Plasma Science and Fusion Center (PSFC) collaborated to answer that question. They found that — depending on its future cost and performance — fusion has the potential to be critically important to decarbonization. Under some conditions, the availability of FPPs could reduce the global cost of decarbonizing by trillions of dollars. More than 25 experts examined the factors that will shape the deployment of FPPs, including costs, climate policy, and operating characteristics. They present their findings in a new report funded through MITEI and entitled “The Role of Fusion Energy in a Decarbonized Electricity System.”
“Right now, there is great interest in fusion energy in many quarters — from the private sector to government to the general public,” says the study’s principal investigator (PI) Robert C. Armstrong, MITEI’s former director and the Chevron Professor of Chemical Engineering, Emeritus. “In undertaking this study, our goal was to provide a balanced, fact-based, analysis-driven guide to help us all understand the prospects for fusion going forward.” Accordingly, the study takes a multidisciplinary approach that combines economic modeling, electric grid modeling, techno-economic analysis, and more to examine important factors that are likely to shape the future deployment and utilization of fusion energy. The investigators from MITEI provided the energy systems modeling capability, while the PSFC participants provided the fusion expertise.
Fusion technologies may be a decade away from commercial deployment, so the detailed technology and costs of future commercial FPPs are not known at this point. As a result, the MIT research team focused on determining what cost levels fusion plants must reach by 2050 to achieve strong market penetration and make a significant contribution to the decarbonization of global electricity supply in the latter half of the century.
The value of having FPPs available on an electric grid will depend on what other options are available, so to perform their analyses, the researchers needed estimates of the future cost and performance of those options, including conventional fossil fuel generators, nuclear fission power plants, VRE generators, and energy storage technologies, as well as electricity demand for specific regions of the world. To find the most reliable data, they searched the published literature as well as results of previous MITEI and PSFC analyses.
Overall, the analyses showed that — while the technology demands of harnessing fusion energy are formidable — so are the potential economic and environmental payoffs of adding this firm, low-carbon technology to the world’s portfolio of energy options.
Perhaps the most remarkable finding is the “societal value” of having commercial FPPs available. “Limiting warming to 1.5 degrees C requires that the world invest in wind, solar, storage, grid infrastructure, and everything else needed to decarbonize the electric power system,” explains Randall Field, executive director of the fusion study and MITEI’s director of research. “The cost of that task can be far lower when FPPs are available as a source of clean, firm electricity.” And the benefit varies depending on the cost of the FPPs. For example, assuming that the cost of building an FPP is $8,000 per kilowatt (kW) in 2050 and falls to $4,300/kW in 2100, the global cost of decarbonizing electric power drops by $3.6 trillion. If the cost of an FPP is $5,600/kW in 2050 and falls to $3,000/kW in 2100, the savings from having the fusion plants available would be $8.7 trillion. (Those calculations are based on differences in global gross domestic product and assume a discount rate of 6 percent. The undiscounted value is about 20 times larger.)
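The roughly 20-fold gap between the discounted and undiscounted savings is just the arithmetic of a 6 percent discount rate compounding over a long horizon. A minimal sketch, using an invented stream of late-century savings rather than the study’s model output:

```python
# Why undiscounted savings dwarf the discounted figure: at 6 percent,
# a dollar saved in year t is worth 1/(1.06**t) today. The savings
# stream below is invented purely to illustrate the mechanics.

DISCOUNT_RATE = 0.06

def present_value(savings_by_year: dict[int, float], base_year: int = 2020) -> float:
    """Discount a {year: savings} stream back to the base year."""
    return sum(s / (1 + DISCOUNT_RATE) ** (yr - base_year)
               for yr, s in savings_by_year.items())

# Hypothetical: $0.2 trillion of avoided system cost per year, 2050-2100
stream = {year: 0.2e12 for year in range(2050, 2101)}

undiscounted = sum(stream.values())
discounted = present_value(stream)
print(f"undiscounted: ${undiscounted / 1e12:.1f}T, "
      f"discounted: ${discounted / 1e12:.2f}T, "
      f"ratio: {undiscounted / discounted:.0f}x")
# -> undiscounted: $10.2T, discounted: $0.58T, ratio: 17x
```

Pushing the savings later in the century or raising the rate widens the ratio, which is the sense in which the undiscounted value is about 20 times larger.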
The goal of other analyses was to determine the scale of deployment worldwide at selected FPP costs. Again, the results are striking. For a deep decarbonization scenario, the total global share of electricity generation from fusion in 2100 ranges from less than 10 percent if the cost of fusion is high to more than 50 percent if the cost of fusion is low.
Other analyses showed that the scale and timing of fusion deployment vary in different parts of the world. Early deployment of fusion can be expected in wealthy nations such as European countries and the United States that have the most aggressive decarbonization policies. But certain other locations — for example, India and the continent of Africa — will have great growth in fusion deployment in the second half of the century due to a large increase in demand for electricity during that time. “In the U.S. and Europe, the amount of demand growth will be low, so it’ll be a matter of switching away from dirty fuels to fusion,” explains Sergey Paltsev, deputy director of the MIT Center for Sustainability Science and Strategy and a senior research scientist at MITEI. “But in India and Africa, for example, the tremendous growth in overall electricity demand will be met with significant amounts of fusion along with other low-carbon generation resources in the later part of the century.”
A set of analyses focusing on nine subregions of the United States showed that the availability and cost of other low-carbon technologies, as well as how tightly carbon emissions are constrained, have a major impact on how FPPs would be deployed and used. In a decarbonized world, FPPs will have the highest penetration in locations with poor diversity, capacity, and quality of renewable resources, and limits on carbon emissions will have a big impact. For example, the Atlantic and Southeast subregions have low renewable resources. In those subregions, wind can produce only a small fraction of the electricity needed, even with maximum onshore wind buildout. Thus, fusion is needed in those subregions, even when carbon constraints are relatively lenient, and any available FPPs would be running much of the time. In contrast, the Central subregion of the United States has excellent renewable resources, especially wind. Thus, fusion competes in the Central subregion only when limits on carbon emissions are very strict, and FPPs will typically be operated only when the renewables can’t meet demand.
An analysis of the power system that serves the New England states provided remarkably detailed results. Using a modeling tool developed at MITEI, the fusion team explored the impact of using different assumptions about not just cost and emissions limits but even such details as potential land-use constraints affecting the use of specific VREs. This approach enabled them to calculate the FPP cost at which fusion units begin to be installed. They were also able to investigate how that “threshold” cost changed with changes in the cap on carbon emissions. The method can even show at what price FPPs begin to replace other specific generating sources. In one set of runs, they determined the cost at which FPPs would begin to displace floating platform offshore wind and rooftop solar.
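Mechanically, finding that threshold amounts to scanning candidate FPP capital costs and checking at each step whether the model chooses to build any fusion. The toy scan below stands in for the full New England capacity-expansion model with a single levelized-cost comparison; every number, including the competing cost and the cost-recovery assumptions, is an invented placeholder rather than a result from the report.

```python
# Toy "threshold cost" scan: step the FPP capital cost downward and record
# where fusion first undercuts the cheapest competing firm, low-carbon option.
# All parameters are invented placeholders for what the grid model computes.

def fusion_lcoe(capex_per_kw: float, fixed_om: float = 60.0,
                crf: float = 0.08, cf: float = 0.85) -> float:
    """Rough levelized cost ($/MWh) from overnight capex ($/kW), an
    annualization factor (crf), fixed O&M ($/kW-yr), and capacity factor."""
    annual_cost_per_kw = capex_per_kw * crf + fixed_om   # $/kW-yr
    return annual_cost_per_kw / (cf * 8760) * 1000       # -> $/MWh

COMPETING_FIRM_LCOE = 90.0  # $/MWh, invented stand-in for the alternative

for capex in range(9000, 2999, -500):  # $/kW, scanned downward
    if fusion_lcoe(capex) < COMPETING_FIRM_LCOE:
        print(f"fusion first enters at roughly ${capex}/kW")
        break
# -> fusion first enters at roughly $7500/kW
```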
“This study is an important contribution to fusion commercialization because it provides economic targets for the use of fusion in the electricity markets,” notes Dennis G. Whyte, co-PI of the fusion study, former director of the PSFC, and the Hitachi America Professor of Engineering in the Department of Nuclear Science and Engineering. “It better quantifies the technical design challenges for fusion developers with respect to pricing, availability, and flexibility to meet changing demand in the future.”
The researchers stress that while fission power plants are included in the analyses, they did not perform a “head-to-head” comparison between fission and fusion, because there are too many unknowns. Fusion and nuclear fission are both firm, low-carbon electricity-generating technologies; but unlike fission, fusion doesn’t use fissile materials as fuels, and it doesn’t generate long-lived nuclear fuel waste that must be managed. As a result, the regulatory requirements for FPPs will be very different from the regulations for today’s fission power plants — but precisely how they will differ is unclear. Likewise, the future public perception and social acceptance of each of these technologies cannot be projected, but could have a major influence on what generation technologies are used to meet future demand.
The results of the study convey several messages about the future of fusion. For example, it’s clear that regulation can be a potentially large cost driver. This should motivate fusion companies to minimize their regulatory and environmental footprint with respect to fuels and activated materials. It should also encourage governments to adopt appropriate and effective regulatory policies to maximize their ability to use fusion energy in achieving their decarbonization goals. And for companies developing fusion technologies, the study’s message is clearly stated in the report: “If the cost and performance targets identified in this report can be achieved, our analysis shows that fusion energy can play a major role in meeting future electricity needs and achieving global net-zero carbon goals.”
Your side might lose. But you don’t have to lose your mind.
Christina Pazzanese
Harvard Staff Writer
Political engagement is healthy. Doomscrolling? Not so much.
Kamala Harris, the Democratic nominee for president, and Donald Trump, the Republican candidate, are locked in an extremely close race. With Election Day less than two weeks away, many Americans are feeling anxious and overwhelmed.
At the Harvard T.H. Chan School of Public Health on Tuesday, journalist Eugene Scott, currently a Fall 2024 Fellow at the Institute of Politics, asked analysts how people who care about politics and the election outcome can remain engaged without harming their mental health.
John Della Volpe, director of polling at the IOP, which conducts the biannual Harvard Youth Poll, noted stress among 18- to 29-year-olds about the state of the world, along with persistent doubt that current political and economic systems will help them.
“Young people today feel this insecurity and instability about the future,” he said. “They feel like all of the problems that older generations have are trickling down to them.”
Across the political spectrum, voters have expressed frustration and exhaustion, said pollster Kristen Soltis Anderson of Echelon Insights.
“It just feels like the stakes are very high,” she said. “And at the same time, people feel powerless. They feel like those who wish to do them harm have been increasing in their power and ability to harm them, and that’s led some people, particularly those who are consuming the most information about politics, to feel the most anxious” about what the election outcome might mean for their everyday lives.
Whether reality-based or manufactured, fear has long been understood to be an effective turnout tool.
“I think in this election, there’s just a lot of selling fear, and that even people who will say, ‘The other side is out there fearmongering,’ then turn around and do it themselves,” Soltis Anderson said. “They catastrophize what the other side taking power would mean.”
Political anxiety, which psychologists now recognize as a discrete condition, has a significant effect on mental health, which in turn deeply impacts physical health, so, “Yes, this is a public health crisis,” said Chris Chanyasulkit, former president of the American Public Health Association.
One simple way to reduce election anxiety is to stop “doomscrolling,” said Chanyasulkit.
“Do not go to bed at night with your phone and reading — it’s terrible. Do not wake up first thing in the morning and reach for your phone to see what’s going on. None of that is good because you’re getting inundated” with outrage tailored to keep users on these platforms. “That’s so not healthy.”
Political engagement doesn’t inevitably lead to anxiety; it can have positive and empowering impacts on people. Research has linked voting to better mental health and better health outcomes overall, said Danielle Allen, James Bryant Conant University Professor and director of the Allen Lab for Democracy Renovation.
And political engagement can include activities beyond voting, like protesting or running for office, Allen said. Those pursuits can take a toll on mental health, however, when people don’t feel tightly connected to others while participating, she said.
“There is this kind of tension between conflict and connection, and that really, I think, is at the core of the question of how we process political anxiety,” Allen said. “Are we processing it in mainly a conflict mode or are we taking the opportunity of engagement to connect with others and do positive work for our communities?”
Generation Z, a key voting cohort in 2024, faces mental health stressors shaped by damaging events during their lifetimes, like the recession of 2008-2009, school shootings, and COVID-driven social isolation, Della Volpe said.
Because they don’t get news from traditional sources, Gen Zers often don’t hear about the good things government has been able to accomplish, which fuels a cycle of negativity, anxiety, and hopelessness.
“That’s why I think it’s so important … for all of us … to remind younger people that things do get better,” he said. “Things have gotten better because of younger people.”
A.R.T. sells off costumes dating to 1980s, attracting thrifters and theater-lovers alike
Need a last-minute Halloween costume? Or know any local theater-lovers who might want to own a piece of history? The American Repertory Theater is holding its first-ever public costume sale — selling off pieces from its inventory dating back to the 1980s. The sale, which started last week, runs through Saturday. Prices range from $1 to $50.
“I think people who like to vintage shop will find a treasure trove of things here,” said Alycia Marucci, the A.R.T.’s wardrobe manager. “But I also think that a theater-lover might want to look through the labels, because you can see the history of a garment.”
Many of the clothes for sale contain custom labels noting the show a costume was used for, and sometimes the name of the actor who wore it. Also for sale: a plethora of items that did not make the cut for a production. Some pieces, sourced by wardrobe designers from contemporary stores, still have the tags on them.
“We have all kinds of different makers, local and not local. And then we do productions like ‘Life of Pi,’ where a lot of things were sourced internationally,” Marucci said. “So we have every sort of option depending on the show that’s coming through.”
An impending renovation to the A.R.T.’s storage space at Fawcett Street sparked the sale, Marucci said.
“We realized how much stuff we really have, and we realized we really don’t have the labor or the resources to go through every garment and clean it and make sure it’s ready and size it and do all those things,” she said. “So the first thing we wanted to do was offer it up to people who might enjoy it or who might use it before we would.”
Alissa Cardone, an associate professor of dance at the Boston Conservatory at Berklee, was one of those people.
“I’m always looking for things,” she said. “And I need old underwear and pointy boots for a piece I’m working on.”
Holding two long coats in her arms, she said she also found some pieces for herself. She said she heard about the sale from a friend who told her not to miss it.
“A lot of people appreciate old versus new.”
Amanda Marcus, a Cambridge resident on the A.R.T. mailing list, was drawn to the many hats on sale.
“Some of the shows I recognize — ‘Gatsby’ was the most recent one,” she said. “And I saw a couple of ‘Gatsby’ hats in there already.”
Marcus was accompanied by Alasdair Post-Quinn, who added that the pair is always in the market for unique costume items.
“We are Burning Man people,” he said, referring to the annual arts event held in Nevada’s Black Rock Desert. “We both modify our own clothes by sewing and adding things on, adding lights.”
The shop will reopen from 10 a.m. to 5 p.m. Friday and Saturday at the A.R.T. Scene Shop at 155 Fawcett St. in Cambridge.
A team led by ETH climate researcher Sandro Vattioni has shown that diamond dust released in the atmosphere could be a good way to cool the climate. However, it is still not a sustainable solution to climate change, says Vattioni in an interview with ETH News.
Applied Materials South East Asia Pte. Ltd. and the National University of Singapore (NUS) are furthering their collaboration to bring advanced semiconductor research capabilities and talent development opportunities to Singapore. Supported by the National Research Foundation (NRF) under the Research, Innovation and Enterprise 2025 (RIE2025) plan, the Applied Materials-NUS Advanced Materials Corporate Lab – established in 2018 and located on the NUS Kent Ridge campus – will be expanded with state-of-the-art semiconductor process equipment in a larger, more advanced cleanroom. In addition, Applied Materials and NUS are collaborating on programmes designed to strengthen Singapore’s talent pipeline.
Mr Heng Swee Keat, Deputy Prime Minister and Chairman of NRF, was the Guest-of-Honour at a ceremony held today at NUS marking the new phase of the Corporate Lab. Guests from the industry, local research ecosystem and government agencies attended the event.
“When NUS and Applied Materials first established the Corporate Lab six years ago, we laid the foundation for a collaboration that has since yielded remarkable success,” said NUS President Professor Tan Eng Chye. “Several of the innovations developed here have progressed from the research stage to the scale-up phase, paving the way for real-world applications that can benefit society. We are very excited to embark on a new chapter in our collaboration with Applied Materials to further advance semiconductor science and technology and inspire the next generation of innovators who will push the envelope and break new ground in this significant field.”
“The Advanced Materials Corporate Lab at NUS is a prime example of how industry-academia collaboration can accelerate the discovery and transition of innovations into commercial applications,” said Dr Satheesh Kuppurao, Group Vice President of Business Development and Growth, Semiconductor Products Group at Applied Materials, Inc. “Our joint work has resulted in numerous patents related to chemistry, semiconductor process and hardware design solutions, along with several sponsored scholarships. Applied Materials is excited to build on our success with NUS and bring enhanced semiconductor research and talent development opportunities to Singapore.”
Hosted at the College of Design and Engineering and the Faculty of Science at NUS, the Applied Materials-NUS Advanced Materials Corporate Lab offers world-class, multi-disciplinary R&D capabilities that span applied chemistry, materials science and microelectronics process engineering. The goal of the Corporate Lab is to accelerate discovery of new materials that can be quickly transferred into commercial applications for manufacturing future generations of semiconductors.
The second phase of the Corporate Lab will elevate the well-established microelectronics research capabilities at NUS to new heights by fostering innovation, accelerating the development of cutting-edge technologies and expanding interdisciplinary collaboration. It will include a new cleanroom in NUS with state-of-the-art materials synthesis and characterisation capabilities. Utilising these enhanced capabilities, Applied Materials and NUS will focus on developing industry-scale solutions to complex semiconductor manufacturing challenges, with an emphasis on integrated processes and interface engineering.
Along with the new phase of the Corporate Lab, the Applied Materials Professorship has been established at NUS to attract experts in semiconductors, materials science and other technology fields. In addition, the enhanced capabilities at the Corporate Lab will introduce new educational and talent development opportunities for undergraduates, postgraduates and professionals in the areas of microelectronics, advanced materials and process engineering. This will ensure the University’s sustained academic leadership within these critical fields and contribute to Singapore’s overall economic growth and development.
For Applied Materials, the latest phase of its collaboration with NUS is part of the company’s “Singapore 2030” plan to strengthen its manufacturing capacity, R&D capabilities, technology ecosystem partnerships and workforce development in Singapore.
Latest research has revealed a “positive association” between the number of properties listed as Airbnb rentals and police-reported robberies and violent crimes in thousands of London neighbourhoods between 2015 and 2018.
In fact, the study led by the University of Cambridge suggests that a 10% increase in active Airbnb rentals in the city would correspond to an additional 1,000 robberies per year across London.*
Urban sociologists say the rapid pace at which crime rises in conjunction with new rentals suggests that the link is driven more by opportunities for crime than by a loss of cohesion within communities – although both are likely contributing factors.
“We tested for the most plausible alternative explanations, from changes in police patrols to tourist hotspots and even football matches,” said Dr Charles Lanfear from Cambridge’s Institute of Criminology, co-author of the study published today in the journal Criminology.
“Nothing changed the core finding that Airbnb rentals are related to higher crime rates in London neighbourhoods.”
“While Airbnb offers benefits to tourists and hosts in terms of ease and financial reward, there may be social consequences to turning large swathes of city neighbourhoods into hotels with little regulation,” Lanfear said.
Founded in 2008, Airbnb is a giant of the digital economy, with more than five million property hosts now active on the platform in some 100,000 cities worldwide.
However, concerns that Airbnb is contributing to unaffordable housing costs have led to a backlash among residents of cities such as Barcelona, and calls for greater regulation.
London is one of the most popular Airbnb markets in the world. An estimated 4.5 million guests stayed in a London Airbnb during the period covered by the study.
Lanfear and his University of Pennsylvania co-author Prof David Kirk used masses of data from AirDNA: a site that scrapes Airbnb to provide figures, trends and approximate geolocations for the short-term letting market.
They mapped AirDNA data from 13 calendar quarters (January 2015 to March 2018) onto “Lower Layer Super Output Areas”, or LSOAs.
These are designated areas of a few streets containing around two thousand residents, used primarily for UK census purposes. There are 4,835 LSOAs in London, and all were included in the study.
Crime statistics from the UK Home Office and Greater London Authority for six categories – robbery, burglary, theft, anti-social behaviour, any violence, and bodily harm – were then mapped onto LSOAs populated with AirDNA data.
The researchers analysed all forms of Airbnb lets, but found the link between active Airbnbs and crime is primarily down to entire properties for rent, rather than spare or shared rooms.
The association between active Airbnb rentals and crime was most significant for robbery and burglary, followed by theft and any violence. No link was found for anti-social behaviour and bodily harm.
On average across London, an additional Airbnb property was associated with a 2% increase in the robbery rate within an LSOA. This association was 1% for thefts, 0.9% for burglaries, and 0.5% for violence.
“While the potential criminogenic effect for each Airbnb rental is small, the accumulative effect of dozens in a neighbourhood, or tens of thousands across the city, is potentially huge,” Lanfear said.
He points out that London had an average of 53,000 active lettings in each calendar-quarter of the study period, and an average of 11 lettings per LSOA.
At its most extreme, one neighbourhood in Soho, an area famed for nightlife, had a high of 318 dedicated Airbnbs – some 30% of all households in the LSOA.
The data models suggest that a 3.2% increase in all types of Airbnb rentals per LSOA would correspond to a 1% increase in robberies city-wide: 325 additional robberies based on the figure of 32,500 recorded robberies in London in 2018.
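The quoted figures are internally consistent, as a back-of-envelope calculation shows; the sketch below only assumes the stated relationship scales linearly.

```python
# Back-of-envelope check of the stated relationship (illustrative only):
# a 3.2% rise in listings per LSOA corresponds to a 1% rise in robberies
# city-wide, against the 32,500 robberies recorded in London in 2018.

recorded_robberies_2018 = 32_500
extra_per_1pct = 0.01 * recorded_robberies_2018      # ~325 robberies

# Scaling linearly, the 10% listings increase quoted earlier implies
# (10 / 3.2) * 1% more robberies, i.e. roughly 1,000 per year.
extra_at_10pct = (10 / 3.2) * extra_per_1pct

print(round(extra_per_1pct), round(extra_at_10pct))  # 325 1016
```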
Lanfear and Kirk extensively stress-tested the association between Airbnb listings and London crime rates.
This included factoring in “criminogenic variables” such as property prices, police stops, the regularity of police patrols, and even English Premier League football games (by both incorporating attendance into data modelling, and removing all LSOAs within a kilometre of major games).
The duo re-ran their data models excluding all 259 LSOAs in central London’s Zone One, to see if the association was limited to high-tourism areas with lots of Airbnb listings. The data models even incorporated the seasonal “ebb and flow” of London tourism. Nothing changed the overall trends.
Prior to crunching the numbers, the researchers speculated that any link might be down to Airbnbs affecting “collective efficacy”: the social cohesion within a community, combined with a willingness to intervene for the public good.
The study measured levels of ‘collective efficacy’ across the city using data from both the Metropolitan Police and the Mayor of London’s Office, which conduct surveys on public perceptions of criminal activity and the likely responses of their communities.
Collective efficacy across London proved consistently high, and it did not explain the association between Airbnbs and crime in the data models.
Moreover, when Airbnb listings rise, the effect on crime is more immediate than one caused by a slow erosion of collective efficacy. “Crime seems to go up as soon as Airbnbs appear, and stays elevated for as long as they are active,” said Lanfear.
The researchers conclude it is likely driven by criminal opportunity. “A single Airbnb rental can create different types of criminal opportunity,” said Lanfear.
“An Airbnb rental can provide an easy potential victim such as a tourist unfamiliar with the area, or a property that is regularly vacant and so easier to burgle. A very temporary occupant may be more likely to cause criminal damage.”
“Offenders may learn to return to areas with more Airbnbs to find unguarded targets,” said Lanfear. “More dedicated Airbnb properties may mean fewer long-term residents with a personal stake in the area who are willing to report potential criminal activity.”
Airbnb has taken steps to prevent crime, including some background checks as well as requirements for extended bookings on occasions popular for one-night parties, such as New Year’s Eve. “The fact that we still find an increase in crime despite Airbnb’s efforts to curtail it reveals the severity of the predicament,” said Kirk.
Added Lanfear: “Short-term letting sites such as Airbnb create incentives for landlords that lead to property speculation, and we can see the effect on urban housing markets. We can now see that the expansion of Airbnb may contribute to city crime rates.”
“It is not the company or even the property owners who experience the criminogenic side effects of Airbnb, it is the local residents building their lives in the neighbourhood.”
Notes:
*Above 2018 levels, which is when the study data ends.
Rising numbers of houses and flats listed as short-term lets on Airbnb are associated with higher rates of crimes such as burglaries and street robberies right across London, according to the most detailed study of its kind.
A sustainable transition to a climate-friendly and biodiversity-rich Switzerland is only possible if we tackle the energy transition, climate change mitigation and biodiversity loss together. This will not be easy, but it is worthwhile and ultimately indispensable, says Reto Knutti.
Educator and food historian Mr Khir Johari, whose work The Food of Singapore Malays: Gastronomic Travels through the Archipelago (Singapore: Marshall Cavendish, 2021) profoundly reshapes our understanding of the gastronomy and cultural history of Singapore Malays, has been awarded the 2024 NUS Singapore History Prize. Mr Khir Johari will receive a cash award of S$50,000.
Created in 2014 in support of the national SG50 programme to celebrate the 50th anniversary of Singapore’s independence, the NUS Singapore History Prize is awarded to an outstanding publication that has made a lasting impact on our understanding of the history of Singapore, and that is accessible to a wide audience of specialist and non-specialist readers.
A five-member Jury Panel chaired by Mr Kishore Mahbubani, Distinguished Fellow at the NUS Asia Research Institute, selected the winning work from a short list of six works, itself culled from a total of 26 submitted works authored by local and international scholars. The other Jury Panel members are: Emeritus Professor John N. Miksic of the NUS Department of Southeast Asian Studies; Professor Tan Tai Yong, President of the Singapore University of Social Sciences; Professor Peter A. Coclanis, Director, Global Research Institute, the University of North Carolina, Chapel Hill; and economist Dr Lam San Ling.
The five books that were shortlisted alongside the winning publication are:
Wesley Leon Aroozoo, The Punkhawala and the Prostitute (Singapore: Epigram Books, 2021).
Timothy P. Barnard, ed., Singaporean Creatures: Histories of Humans and Other Animals in the Garden City (Singapore: NUS Press, 2024).
Kevin Blackburn, The Comfort Women of Singapore in History and Memory (Singapore: NUS Press, 2022).
Loh Kah Seng, Alex Tan Tiong Hee, Koh Keng We, Tan Teng Phee, and Juria Toramae, Theatres of Memory: Industrial Heritage of 20th Century Singapore (Singapore: Pagesetters Services, 2021).
Lynn Wong Yuqing and Lee Kok Leong, Reviving Qixi: Singapore’s Forgotten Seven Sisters Festival (Singapore: Renforest Publishing, 2022).
Of the five books, the Jury Panel also highlighted two that deserve special commendation and recognition. They are, ranked in order of priority, Reviving Qixi: Singapore’s Forgotten Seven Sisters Festival by Lynn Wong Yuqing and Lee Kok Leong; and Theatres of Memory: Industrial Heritage of 20th Century Singapore by Loh Kah Seng, Alex Tan Tiong Hee, Koh Keng We, Tan Teng Phee, and Juria Toramae. The Jury Panel found the two books compelling and riveting: one offers new insights into a forgotten festival celebrated by the Chinese community in Singapore while the other delves into the understudied labour and industrial history of Singapore.
Mr Mahbubani, Chair of the NUS Singapore History Prize Jury Panel, said: “Southeast Asia is a magical place. At a time when many regions are suffering conflict, tension and stagnation, Southeast Asia remains an oasis of peace and prosperity, despite its incredible diversity. Why? The deeper and longer history of the region may explain this. Khir Johari’s book is a deserving winner of the Singapore History Prize because it sheds new light on our history. Few Singaporeans know that over a hundred years ago, Singapore had already emerged as ‘the New York of the Nusantara.’ This book will open their eyes to Singapore’s long and rich involvement with its surrounding region. And it is a truly beautifully produced book that will enchant its readers.”
The Food of Singapore Malays: Gastronomic Travels through the Archipelago
The Food of Singapore Malays: Gastronomic Travels through the Archipelago tells the previously unwritten story of a people. Between the vast Indian and Pacific oceans lies the Malay Archipelago, known widely as the Nusantara, which has nourished the lives of indigenous Malays throughout the centuries and nurtured the diverse peoples who have set foot on its shores. Today, the Malays make up less than a fifth of the population in Singapore, a city with ancient ties to the Malay world.
This book explores their food, not just as a means of sustenance but as a cultural activity. Inheriting the Nusantara's rich flavours, Singapore Malays have a grand culinary heritage reflecting their worldviews, social values and historical interactions with other cultures. Through close examination of their daily objects, customs, art and literature, these pages reveal how the food Malays enjoy is deeply embedded in different aspects of their identity.
Following the broad sweep of Malay cuisine's evolution – from the 7th-century kingdom of Srivijaya to the 21st-century emporium of cosmopolitan Singapore – this book traces the continuity and dynamism of a shared cultural consciousness. Sumptuously served with stunning photographs, delicious recipes and diligent research, this is essential reading for anyone – gourmets and amateurs alike – hungry for a deeper understanding of the relationship between people and their food.
Please refer to the Annex for the citation on the winning work by Khir Johari, along with the two books receiving special commendations.
About Khir Johari
Khir Johari is the author of the award-winning book The Food of Singapore Malays: Gastronomic Travels through the Archipelago, which has received widespread acclaim, including Singapore’s Book of the Year 2022, the Gourmand World Food Culture Award’s Best of the Best Book 2023, and its prestigious Best of the Last 25 Years. Following this success, Khir founded “Dialogues by Khir Johari,” a platform dedicated to exploring Nusantara’s gastronomy through events and online discussions. Its inaugural event was a symposium titled Serumpun: Tasting Tradition, Telling Tales.
Aside from his literary achievements, Khir is an avid art collector and independent researcher specialising in the history and heritage of maritime Southeast Asia. He holds a Bachelor of Science degree in Mathematics from Santa Clara University and a Master’s in Education from Stanford University. He serves as a board member of the Asian Civilisations Museum Singapore.
Born and raised in Kampong Gelam, Khir was immersed in the diverse Nusantara culinary traditions from a young age, learning from both his family and the larger vibrant communities of this historic district.
Khir Johari remarked: “I am touched and humbled by this recognition. It is an honour to receive this NUS Singapore History Prize among such a distinguished list of writers. My hope is that we continue the important work of preserving and celebrating our rich culinary heritage. Our shared cultural roots are an essential starting point for understanding how food connects us across generations and borders.”
“When I set out on this book project 14 years ago, my aim was to document our nation’s first cuisine. What started as a chronicle of food culture evolved into a celebration of our custodians of gastronomic knowledge and wisdom. This book is a tribute to the fishermen, farmers, hawkers, smiths who produced our kitchen accoutrements, as well as cookbook writers, cookery teachers, homemakers and more.”
“My wish for this book is that it answers the question of why we eat what we eat as a people. I also hope it serves as a reminder that Singapore has always been an important node in a larger interconnected network. Indeed, Singapore can be aptly regarded as the New York of the Nusantara for its role and contributions in shipping, trade, publishing and performing arts of the region.”
The NUS Singapore History Prize
Mooted by Mr Mahbubani, the NUS Singapore History Prize aims to stimulate an engagement with Singapore’s history broadly understood (this might include pre-1819) and works dealing with Singapore’s place in the world. Another purpose is to make the complexities and nuances of Singapore’s history more accessible to non-academic audiences and to cast a wide net for consideration of works that deal with history. At the same time, the Prize hopes to generate a greater understanding among Singaporean citizens of their own unique history.
The Prize is an open global competition and is administered by the Department of History at the NUS Faculty of Arts and Social Sciences. The 2024 Prize was open to works in English (written or translated) published between 1 June 2021 and 31 May 2024. Non-fiction and fiction works were eligible for the Prize. Other creative works that have clear historical themes could also be submitted. Book-length works that were either authored or co-authored, and addressed any time period, theme, or field of Singaporean history, or include a substantial aspect of Singaporean history as part of a wider story were eligible.
The Prize is awarded every three years, and the author of the winning publication will receive a cash award of S$50,000. The inaugural Prize was awarded in 2018 to Professor John Miksic, whose work Singapore and the Silk Road of the Sea, 1300–1800 provides detailed archaeological evidence that Singapore’s story began more than 700 years ago. In 2021, the Prize was awarded to Hidayah Amin for her book Leluhur: Singapore’s Kampong Gelam which presents the history of Kampong Gelam in the context of changes to Singapore’s economic, political, and social history over the last 200 years.
Enquiries about the next round of the NUS Singapore History Prize, which will open for nominations in due course and be awarded in 2027, should be addressed to hisprize@nus.edu.sg.
Trailblazing women from the University of Melbourne, including Professor Jane Gunn AO, Associate Professor Ada Cheung and medical student Aayushi Khillan, have been inducted into the Victorian Honour Roll of Women in recognition of their outstanding contributions to health and research.
In a first for both universities, MIT undergraduates are engaged in research projects at the Universidad del Valle de Guatemala (UVG), while MIT scholars are collaborating with UVG undergraduates on in-depth field studies in Guatemala.
These pilot projects are part of a larger enterprise, called ASPIRE (Achieving Sustainable Partnerships for Innovation, Research, and Entrepreneurship). Funded by the U.S. Agency for International Development, this five-year, $15-million initiative brings together MIT, UVG, and the Guatemalan Exporters Association to promote sustainable solutions to local development challenges.
“This research is yielding insights into our understanding of how to design with and for marginalized people, specifically Indigenous people,” says Elizabeth Hoffecker, co-principal investigator of ASPIRE at MIT and director of the MIT Local Innovation Group.
The students’ work is bearing fruit in the form of publications and new products — directly advancing ASPIRE’s goals to create an innovation ecosystem in Guatemala that can be replicated elsewhere in Central and Latin America.
For the students, the project offers rewards both tangible and inspirational.
“My experience allowed me to find my interest in local innovation and entrepreneurship,” says Ximena Sarmiento García, a fifth-year undergraduate at UVG majoring in anthropology. Supervised by Hoffecker, Sarmiento García says, “I learned how to inform myself, investigate, and find solutions — to become a researcher.”
Sandra Youssef, a rising junior in mechanical engineering at MIT, collaborated with UVG researchers and Indigenous farmers to design a mobile cart to improve the harvest yield of snow peas. “It was perfect for me,” she says. “My goal was to use creative, new technologies and science to make a dent in difficult problems.”
Remote and effective
Kendra Leith, co-principal investigator of ASPIRE, and associate director for research at MIT D-Lab, shaped the MIT-based undergraduate research opportunities (UROPs) in concert with UVG colleagues. “Although MIT students aren’t currently permitted to travel to Guatemala, I wanted them to have an opportunity to apply their experience and knowledge to address real-world challenges,” says Leith. “The Covid pandemic prepared them and their counterparts at UVG for effective remote collaboration — the UROPs completed remarkably productive research projects over Zoom and met our goals for them.”
MIT students participated in some of UVG’s most ambitious ASPIRE research. For instance, Sydney Baller, a rising sophomore in mechanical engineering, joined a team of Indigenous farmers and UVG mechanical engineers investigating the manufacturing process and potential markets for essential oils extracted from thyme, rosemary, and chamomile plants.
“Indigenous people have thousands of years working with plant extracts and ancient remedies,” says Baller. “There is promising history there that would be important to follow up with more modern research.”
Sandra Youssef used computer-aided design and manufacturing to realize a design created in a hackathon by snow pea farmers. “Our cart had to hold 495 pounds of snow peas without collapsing or overturning, navigate narrow paths on hills, and be simple and inexpensive to assemble,” she says. The snow pea producers have tested two of Youssef’s designs, built by a team at UVG led by Rony Herrarte, a faculty member in the department of mechanical engineering.
From waste to filter
Two MIT undergraduates joined one of UVG’s long-standing projects: addressing pollution in Guatemala’s water. The research seeks to use chitosan molecules, extracted from shrimp shells, for bioremediation of heavy metals and other water contaminants. These shells are available in abundance, left as waste by the country’s shrimp industry.
Sophomores Ariana Hodlewsky, majoring in chemical engineering, and Paolo Mangiafico, majoring in brain and cognitive sciences, signed on to work with principal investigator and chemistry department instructor Allan Vásquez (UVG) on filtration systems utilizing chitosan.
“The team wants to find a cost-effective product rural communities, most at risk from polluted water, can use in homes or in town water systems,” says Mangiafico. “So we have been investigating different technologies for water filtration, and analyzing the Guatemalan and U.S. markets to understand the regulations and opportunities that might affect introduction of a chitosan-based product.”
“Our research into how different communities use water and into potential consumers and pitfalls sets the scene for prototypes UVG wants to produce,” says Hodlewsky.
Lourdes Figueroa, UVG ASPIRE project manager for technology transfer, found their assistance invaluable.
“Paolo and Ariana brought the MIT culture and mindset to the project,” she says. “They wanted to understand not only how the technology works, but the best ways of getting the technology out of the lab to make it useful.”
This was an “Aha!” moment, says Figueroa. “The MIT students made a major contribution to both the engineering and marketing sides by emphasizing that you have to think about how to guarantee the market acceptance of the technology while it is still under development.”
Innovation ecosystems
UVG’s three campuses have served as incubators for problem-solving innovation and entrepreneurship, in many cases driven by students from Indigenous communities and families. In 2022, Elizabeth Hoffecker, with eight UVG anthropology majors, set out to identify the most vibrant examples of these collaborative initiatives, which ASPIRE seeks to promote and replicate.
Hoffecker’s “innovation ecosystem diagnostic” revealed a cluster of activity centered on UVG’s Altiplano campus in the central highlands, which serves Mayan communities. Hoffecker and two of the anthropology students focused on four examples for a series of case studies, which they are currently preparing for submission to a peer-reviewed journal.
“The caliber of their work was so good that it became clear to me that we could collaborate on a paper,” says Hoffecker. “It was my first time publishing with undergraduates.”
The researchers’ cases included novel production of traditional thread, and creation of a 3D phytoplankton kit that is being used to educate community members about water pollution in Lake Atitlán, a tourist destination that drives the local economy but is increasingly being affected by toxic algae blooms. Hoffecker singles out a project by Indigenous undergraduates who developed play-based teaching tools for introducing basic mathematical concepts.
“These connect to local Mayan ways of understanding and offer a novel, hands-on way to strengthen the math teaching skills of local primary school teachers in Indigenous communities,” says Hoffecker. “They created something that addresses a very immediate need in the community — lack of training.”
Both of Hoffecker’s undergraduate collaborators are writing theses inspired by these case studies.
“My time with Elizabeth allowed me to learn how to conduct research from scratch, ask for help, find solutions, and trust myself,” says Sarmiento García. She finds the ASPIRE approach profoundly appealing. “It is not only ethical, but also deeply committed to applying results to the real lives of the people involved.”
“This experience has been incredibly positive, validating my own ability to generate knowledge through research, rather than relying only on established authors to back up my arguments,” says Camila del Cid, a fifth-year anthropology student. “This was empowering, especially as a Latin American researcher, because it emphasized that my perspective and contributions are important.”
Hoffecker says this pilot run with UVG undergrads produced “high-quality research that can inform evidence-based decision-making on development issues of top regional priority” — a key goal for ASPIRE. Hoffecker plans to “develop a pathway that other UVG students can follow to conduct similar research.”
MIT undergraduate research will continue. “Our students’ activities have been very valuable in Guatemala, so much so that the snow pea, chitosan, and essential oils teams would like to continue working with our students this year,” says Leith. She anticipates a new round of MIT UROPs for next summer.
Youssef, for one, is eager to get to work on refining the snow pea cart. “I like the idea of working outside my comfort zone, thinking about things that seem unsolvable and coming up with a solution to fix some aspect of the problem,” she says.
As society grapples with the impacts of a worsening climate—from the increased frequency and intensity of extreme weather events to rising sea levels and deadly heat waves—the need for actionable solutions has never been greater, Penn researchers say.
Hope flags when medications fail, isolating and endangering patients. Backed by a major grant, 2 Harvard scientists are focused on reducing the distance between diagnosis and recovery.
For millions of people every year, depression is not just an illness but a grueling pattern — anguish, meds, failure, repeat.
Supported by a major grant, two Harvard scientists want to break that pattern, each by his own path.
David Walt is operating at a microscopic level, observing cell abnormalities that may contribute to depression. Diego Pizzagalli is taking a bigger-picture approach, using MRIs and other methods to identify potential treatments by tracking activity in key brain regions. Their common aim, backed by the nonprofit Wellcome Leap, is to speed the path from diagnosis to an effective medication for the individual patient.
“We’re concerned that when people go through this trial-and-error approach, they lose hope,” Pizzagalli said. “We’re really interested in evaluating whether by using tools of neuroscience, we can get to the correct treatment faster.”
More than 22 million U.S. adults suffer at least one major depressive episode every year. The experience is lonely, debilitating, and dangerous. As anxiety, insomnia, and other symptoms take hold, patients lose touch with family and friends. Feelings of isolation interrupt one of the greatest sources of happiness and well-being — relationships — and heighten their risk of suicide. The damage also creeps into broader society, including U.S. workplaces, imposing an economic burden of more than $330 billion annually.
A first course of antidepressant therapy typically takes 12 to 14 weeks to take full effect, and it works for only about a third of patients.
Research shows varying success with subsequent treatments, with as few as 40 percent of patients finding a drug that works for them by the fourth try.
Talk therapy can help, and emerging technologies, including neurostimulation, have shown promise. But one of the most common treatments for depression — antidepressants such as selective serotonin reuptake inhibitors, or SSRIs, which are often prescribed by a primary care physician — has yielded mixed results, in part because of a grindingly inexact matching process.
“You end up with lots of people who, frankly, need a personalized individualized analysis to figure out the underlying basis of their disease that can be addressed by a particular drug,” Walt said. “It’s just sort of guesswork right now and there is no strong scientific basis for what’s right for each person. It’s, ‘Let’s try this drug and see if it works.’”
The goal of both researchers is to help shape an approach that is more effective for being more precise. “We’ve wanted to convince ourselves — and the field, hopefully — that personalized treatment is possible in depression,” Pizzagalli said. “Persistent symptoms can be very impairing and failed antidepressant treatments are associated with costs to individuals and society, with loss of productivity.”
Written in the blood
Walt, a professor of pathology and the Hansjörg Wyss Professor of Bioinspired Engineering at Harvard Medical School, wants to know whether certain proteins at work in the brain can shed light on how depression develops, allowing scientists to identify potential treatments. He and his team are studying four major cell types, each serving a different function with unique protein molecules.
“The expectation is that the proteins that we capture and measure from these four different cells will be different in individuals who suffer from major depression compared to those from healthy individuals,” Walt said.
One target is neurons, which transmit messages among brain regions. Changes in neurotransmitters such as serotonin can contribute to depression. (SSRIs function by increasing serotonin levels in the brain.) The other areas of focus are oligodendrocytes, microglia, and astrocytes, which influence cell structure, immune response, and metabolic function, respectively.
Any abnormality in these cells can weaken the connections in the brain and leave a person more vulnerable to mood disorders. Past research has suggested that antidepressants can help our brains repair and form new connections among damaged cells.
If investigators can determine which cell types are being affected in patients with depression, then eventually they should be able to target the underlying mechanism responsible for those changes, Walt said. If it’s a neurotransmission problem, then specialists can focus on finding drugs, including SSRIs, that best target neuron growth and regulation. If the issue is related to immune cells, researchers can try to identify drugs that affect the immune system.
Walt has zeroed in on extracellular vesicles — pieces of cells that travel out of the brain and into our blood.
“Parts of the cell membrane that encapsulate the cell break off from the cell into these really tiny nanoparticles,” he said. “These nanoparticles contain all the contents of the cell from which they have broken off. And these nanoparticles can get through the blood-brain barrier into the bloodstream to some extent.”
By comparing blood — which contains less than 1 percent material from the brain — with spinal fluid, he and his team have been able to identify specific markers in these different cell types that allow them to isolate extracellular vesicles in the blood.
The goal he has in mind would be life-changing.
“If you could identify the right markers in blood, then you could give a drug to somebody, and then have them come back the next week, take their blood, and measure biomarkers to determine if the drug is working,” he said.
“You may say, ‘This isn’t working, because your markers are exactly where they were last week before you started taking the drug. We need to switch you to a new drug immediately.’ Our goal is to avoid having patients wait six months to see if a drug works. If we can, it means we’re making progress toward helping these patients find the right treatment, compress the timeframe, and reduce the risk of suicide.”
Watching for a breakthrough
Pizzagalli, a psychiatry professor at the Medical School and director of the Center for Depression, Anxiety and Stress Research at McLean Hospital, has spent his career examining psychological, environmental, and neurobiological factors associated with mood disorders, including major depression.
For the Wellcome Leap project, his lab is probing behavior and brain function for markers that could be used to assess the severity of a patient’s depression and guide treatment choices.
The work builds on a previous study that deployed neurocognitive tests, EEG, and functional MRI to pinpoint biomarkers that could predict a positive response to two widely prescribed drugs: the atypical antidepressant bupropion, whose brand name is Wellbutrin, and the SSRI sertraline, whose brand name is Zoloft. That research led the team to imaging methods that in both cases reliably predicted a favorable response. The working premise now is that an MRI might be able to determine whether an SSRI or other medication is the best avenue of treatment.
“Our hope is that individuals with the bupropion markers will do very well when receiving bupropion, and vice versa for the patients with the sertraline markers,” said Pizzagalli, whose team will also weigh personal attributes (age, race, sex, etc.), personality traits, and performance in neuropsychological tests.
The functional MRI is recorded with patients in a resting state. Researchers track activated brain regions.
“Brain regions become activated with anything that we do: Thoughts, emotions, motivation, and so on,” said Pizzagalli, adding: “It’s not the case that every single brain region is activating in isolation and not communicating with the other brain regions. Information is basically passed from region to region.”
The use of fMRI has shown that the anterior cingulate cortex and the nucleus accumbens signal to each other in a network associated with reward sensitivity and learning.
The two regions his team is most interested in are part of the so-called brain reward system. The nucleus accumbens sits deep in the brain and is known for its role in pleasure and motivation; the rostral anterior cingulate cortex is located in the frontal lobe and is a key intersection of cognition and emotion.
Pizzagalli is exploring the strength of the link between the two regions, which could help a prescriber decide between an SSRI and a non-SSRI.
“What we’re doing is moving from looking at the level of brain activity in a single region to activity across a network,” he said.
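In practice, a common way to quantify that region-to-region communication in resting-state fMRI is to correlate the two regions’ signal time series; the following is a generic sketch with simulated data, not the lab’s actual analysis pipeline.

```python
# A minimal sketch (simulated data, not patient fMRI): resting-state
# functional connectivity between two regions of interest is often
# summarized as the Pearson correlation of their signal time series.
import numpy as np

rng = np.random.default_rng(0)
racc = rng.standard_normal(200)                      # simulated rACC signal
nacc = 0.6 * racc + 0.8 * rng.standard_normal(200)   # coupled NAcc signal

connectivity = np.corrcoef(racc, nacc)[0, 1]         # Pearson r
print(f"rACC-NAcc connectivity: {connectivity:.2f}")
```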
Both Walt and Pizzagalli pointed out that personalized treatment of depression and other brain disorders has been a challenge for reasons that go far beyond the capabilities of any one lab. Cost is a major obstacle, as is the deep individual complexity of the illness. But clarifying that such personalized treatment is possible and worth pursuing would be a major turning point, both for clinicians and their patients.
“The task is on researchers just to show whether these types of approaches can actually dramatically improve the response rate,” Pizzagalli said.
The journey to an answer is in its very early stages, both he and Walt were quick to note. Pizzagalli’s lab expects to finish work on their project in mid-to-late 2025. The first phase of Walt’s initiative is set to wrap this month, but the larger plan will unfold over years.
In the end, the researchers hope to have made major progress toward reclaiming time for patients and families suffering under the sometimes crushing weight of depression.
“It might be a blood test, it might be a blood test combined with imaging, it could be a blood test combined with imaging combined with certain behavioral features,” Walt said. “It could be that all of these tools, or a combination, will be necessary to really do precision diagnostics and be able to identify the right drug for the right person at the right time.”
If you or someone you know is struggling with a mental health issue, the National Institute of Mental Health has resources that can help. In a crisis, use the 988 Suicide and Crisis Lifeline. On campus, help is available through Counseling and Mental Health Services. There is also a 24/7 support line: 617-495-2042.
Writing in the journal Science Robotics, the research team, led by the University of Cambridge, outline how ‘palaeo-inspired robotics’ could provide a valuable experimental approach to studying how the pectoral and pelvic fins of ancient fish evolved to support weight on land.
“Since fossil evidence is limited, we have an incomplete picture of how ancient life made the transition to land,” said lead author Dr Michael Ishida from Cambridge’s Department of Engineering. “Palaeontologists examine ancient fossils for clues about the structure of hip and pelvic joints, but there are limits to what we can learn from fossils alone. That’s where robots can come in, helping us fill gaps in the research, particularly when studying major shifts in how vertebrates moved.”
Ishida is a member of Cambridge’s Bio-Inspired Robotics Laboratory, led by Professor Fumiya Iida. The team is developing energy-efficient robots for a variety of applications, which take their inspiration from the efficient ways that animals and humans move.
With funding from the Human Frontier Science Program, the team is developing palaeo-inspired robots, in part by taking their inspiration from modern-day ‘walking fish’ such as mudskippers, and from fossils of extinct fish. “In the lab, we can’t make a living fish walk differently, and we certainly can’t get a fossil to move, so we’re using robots to simulate their anatomy and behaviour,” said Ishida.
The team is creating robotic analogues of ancient fish skeletons, complete with mechanical joints that mimic muscles and ligaments. Once complete, the team will perform experiments on these robots to determine how these ancient creatures might have moved.
“We want to know things like how much energy different walking patterns would have required, or which movements were most efficient,” said Ishida. “This data can help confirm or challenge existing theories about how these early animals evolved.”
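As a rough illustration of the kind of quantity such experiments yield, gaits are often compared by their dimensionless cost of transport: energy used per unit weight per unit distance. This sketch is not the team's method, and the robot mass, energy readings, and gait names are hypothetical placeholders.

```python
def cost_of_transport(energy_j: float, mass_kg: float, distance_m: float,
                      g: float = 9.81) -> float:
    """Dimensionless cost of transport: energy / (weight * distance travelled)."""
    return energy_j / (mass_kg * g * distance_m)

# Hypothetical energy readings for two candidate gaits on a 1.5 kg robot
# walking the same 2 m track; lower values mean a more efficient gait.
gaits = {
    "both-fins-together 'crutching'": cost_of_transport(38.0, 1.5, 2.0),
    "alternating fin steps": cost_of_transport(52.0, 1.5, 2.0),
}
for name, cot in gaits.items():
    print(f"{name}: cost of transport = {cot:.2f}")
```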
One of the biggest challenges in this field is the lack of comprehensive fossil records. Many of the ancient species from this period in Earth’s history are known only from partial skeletons, making it difficult to reconstruct their full range of movement.
“In some cases, we’re just guessing how certain bones connected or functioned,” said Ishida. “That’s why robots are so useful—they help us confirm these guesses and provide new evidence to support or rebut them.”
While robots are commonly used to study movement in living animals, very few research groups are using them to study extinct species. “There are only a few groups doing this kind of work,” said Ishida. “But we think it’s a natural fit – robots can provide insights into ancient animals that we simply can’t get from fossils or modern species alone.”
The team hopes that their work will encourage other researchers to explore the potential of robotics to study the biomechanics of long-extinct animals. “We’re trying to close the loop between fossil evidence and real-world mechanics,” said Ishida. “Computer models are obviously incredibly important in this area of research, but since robots are interacting with the real world, they can help us test theories about how these creatures moved, and maybe even why they moved the way they did.”
The team is currently in the early stages of building their palaeo-robots, but they hope to have some results within the next year. The researchers say they hope their robot models will not only deepen understanding of evolutionary biology, but could also open up new avenues of collaboration between engineers and researchers in other fields.
The research was supported by the Human Frontier Science Program. Fumiya Iida is a Fellow of Corpus Christi College, Cambridge. Michael Ishida is a Postdoctoral Research Associate at Gonville and Caius College, Cambridge.
A commitment to the liberal arts is at the core of Princeton University's mission. A new cohort of outstanding postdocs has joined the Society of Fellows for three years of teaching and research.
Kennedy School panel says it’s a combination of knowledge — and skills
Christina Pazzanese
Harvard Staff Writer
Surveys have shown that Americans’ faith in the nation’s political system and its institutions has declined, as partisan polarization and civic disengagement have risen. In response, there is a growing consensus that students — and American democracy itself — would greatly benefit from more robust civic education.
Higher education, in particular, has long had an essential role in helping foster a healthy democracy, but are colleges and universities doing enough to meet the moment?
A panel of specialists on public policy and civic engagement considered the many options during a talk last week hosted by the Ash Center for Democratic Governance and Innovation at the Harvard Kennedy School and the Democratic Knowledge Project at Harvard Graduate School of Education.
During the current period of “incredible turmoil and debate” on college campuses across the country, the question over how higher ed should fulfill that role has become especially salient and important, said Danielle Allen, James Bryant Conant University Professor, during the Friday event.
Allen is also director of the Allen Lab for Democracy Renovation, which focuses on the challenge of how to tune up democratic institutions to ensure they continue operating as intended and to fix components when they falter.
Experts say if we truly want to prepare college students to be fully engaged in civic life after graduation, schools need to insist that appropriate training be part of an undergraduate education. That would include necessary academic work as well as the development of the intellectual and social tools necessary to engage in the kind of vigorous, fact-based debate that living in a healthy democracy requires.
It’s “absolutely essential” that every college student take courses in American government, politics, history, and economics regardless of their concentration or career goals, said John Bridgeland ’82, who chairs Civic Enterprises, a social enterprise firm, and worked on domestic policy initiatives during the George W. Bush and Barack Obama administrations.
Schools must also expand how students understand democracy — that it’s a cooperative activity all citizens are part of and responsible for maintaining, Bridgeland argued. And they could do more to encourage and facilitate stints in public service, perhaps by linking it to tuition defrayment.
“I think cultivating the spirit not only of individual economic success, but of collective work mutually tied to one another, is the secret sauce our nation needs right now more than ever,” he said.
Allen noted that one of the biggest obstacles schools must overcome to provide “a democracy education” is “ideological self-segregation and an absence of civil discourse across political divides.”
Panelists said there are some strategies that can help.
Schools not only ought to be deliberate in teaching the history and requisite skills of democracy, but also model the behaviors of a democratic society, such as supporting arguments with facts, listening to and considering evidence from other viewpoints, and having sometimes-uncomfortable conversations, said Cecilia Muñoz, a senior White House staffer on domestic policy during the Obama administration.
Beyond dialogue, schools can also bring students from different viewpoints to work together on local, real-world problems so they learn a cornerstone of democracy — how to move beyond ideological differences in order to get things done, said Bridgeland.
Changing attitudes, curricula, and behavior certainly won’t be easy. Still, lots of college and university officials remain eager to see their institutions play a more intentional role in “renovating” American democracy and educating its citizenry, said Rajiv Vinnakota, president of the Institute for Citizens and Scholars, an organization assisting more than 100 college presidents pushing for better civic preparedness.
Contrary to news reports about ideological clashes on college campuses over the Middle East or presidential politics that might suggest students today are less open to engaging with those from different perspectives, survey data show “that actually isn’t the case,” Vinnakota said.
One major structural challenge that school administrators will need to overcome, however, is that students are often afraid to speak their minds freely for fear of “being canceled” by peers and on social media. It’s a “critical” societal problem with no clear solution, he warned.
‘Harvard Thinking’: Plastics are everywhere, even in our bodies
We ingest equivalent of credit card per week — how worried should we be? In podcast, experts discuss how to minimize exposure, possible solutions.
Samantha Laine Perfas
Harvard Staff Writer
The world has a plastic problem. Not only are nonbiodegradable plastics clogging oceans and landfills, but they’re also invading our bodies.
“Ingestion is the primary route of exposure, and we are consuming about 5 grams of micronanoplastics per week; that’s the equivalent of a credit card,” said Philip Demokritou, the founding director of the Environmental Health Nanoscience Laboratory and the Center for Nanotechnology and Nanotoxicology at the School of Public Health.
We’re “drowning” in plastic exposure, according to Don Ingber, the founding director of the Wyss Institute and a professor both in the Medical School and School of Engineering. From the synthetic clothes we wear to wildfire smoke, it’s nearly impossible to escape. And our bodies can’t fully break plastics down. This is especially alarming as research has found plastic in nearly every bodily organ.
“These particles … are what we call sustained release vehicles, meaning they’re just sitting there, and every day they’re releasing a little bit for the rest of the lifetime of those cells in your gut or other organs,” Ingber said. “That makes [them] even more dangerous.”
Mary Johnson, a research scientist in the Environmental Health Department of the School of Public Health, said more research is needed to figure out who is at the highest risk of exposure. But all consumers should be trying to minimize their use of plastics — from what they wear to how they furnish their homes to how they prepare food — until better, biodegradable alternatives can be found.
“As a consumer I feel like I can’t wait; I want to minimize my own exposure,” Johnson said.
In this episode, host Samantha Laine Perfas speaks with Demokritou, Ingber, and Johnson about the prevalence of plastic — and what to do about it.
Transcript
Philip Demokritou: Ingestion is the primary route of exposure, and we are consuming about 5 grams of micronanoplastics per week; that’s the equivalent of a credit card.
Samantha Laine Perfas: Our planet is filled with plastic. On average, we produce 430 million tons every year, most of which is used only for a short period of time and then discarded. But plastic isn’t just in the environment: it’s now in our bodies. Microplastics have been found in our bloodstreams, lungs, and other organs, and we’re only recently beginning to understand how this affects our health.
How destructive is our relationship to plastic?
Welcome to “Harvard Thinking,” a podcast where the life of the mind meets everyday life. Today, I’m joined by:
Demokritou: Philip Demokritou. I’m the founding director of the Harvard Nanotechnology and Nanotoxicology Center at the School of Public Health.
Laine Perfas: His current research focuses on how nanomaterials and particles affect our health and safety. Then:
Don Ingber: Don Ingber. I’m the founding director of the Wyss Institute and a professor both in the Medical School and School of Engineering, as well as Boston Children’s Hospital.
Laine Perfas: He’s also a cell biologist and a pioneer in the field of bionics. And finally:
Mary Johnson: Mary Johnson. I am a research scientist in the Environmental Health Department of the School of Public Health.
Laine Perfas: Her work focuses on immune health and investigates how the effects of air pollution, wildfires, and other environmental exposures affect our bodies.
And I’m Samantha Laine Perfas, your host and a writer for the Harvard Gazette. Today, we’ll take a close look at the plastic problem.
I want to start off by asking why are we so obsessed with plastic?
Demokritou: I think plastics have become an integral part of modern life because of their low cost, amazing properties, and they definitely made our life easier. We found one of the very first ads that the industry put together and it starts with, “Plastic is fantastic.” So, I guess we live in the age of disposable living. We are all addicted to plastics.
Ingber: It’s basically our culture’s heroin. It does everything you’d want it to do at incredibly low cost with high fidelity, reproducibility, manufacturability. It’s useful in every way, as Philip was saying. It’s a fantastic product, it’s just getting rid of it is the problem. It’s killing us over time; I mean the whole planet, not just humans. I mean, marine life, animals. In the last 15, 20 years, it’s really reached the point where it’s obvious when you see large expanses of ocean filled with plastic trash, that doesn’t degrade in landfills and so forth, but it was always there. We just weren’t aware.
Demokritou: When we don’t manage plastics in a sustainable way, then it becomes a problem. And that’s the case for every chemical we’re using, every material we’re using. And unfortunately, the modus operandi of our society when it comes to chemicals is, let’s put them out there and we’ll worry and clean the mess 20, 30 years later when scientists like myself, Don, and Mary discover that they are causing harm to the environment and human beings.
Laine Perfas: So I want to talk a little bit about microplastics and nanoplastics specifically. What are they and why are they so concerning to our health?
Johnson: So the plastics are wonderful when they’re being used, but then they decompose. And as they decompose, the microplastics are small plastics, less than 5 millimeters, and they can further degrade into nanoplastics, which are even smaller. And the concern, obviously, is not only is it destroying our environment, a few years ago we found out it’s in our blood, and now we’re finding out it’s in a lot of our organs and tissues and really it’s everywhere. I think that has heightened our concern and our awareness that we really have to figure out a way to prevent this degradation and exposing ourselves to the microplastics.
Laine Perfas: How is it getting into our bodies?
Ingber: I was in meetings 15, 20 years ago when the field of nanotoxicology sort of initiated when people realized that these particles are getting — this was all types of nanoparticles, not just plastics — getting into every organ in the body, and it was hard to understand how this could be happening because we have barriers in our tissues. Maybe 14 years ago we developed devices called organs-on-chips that have human cells oriented in little engineered devices that have hollow channels that could have flow of fluids to mimic blood and air. And we lined it by the lung cells that line our air sac. And we put particles that were some of the more plastic particles in the lung. And what we found is that actually breathing motions increase their absorption enormously, and it’s analogous, I think, to like viruses going across, which are similar size, going across the barrier of these tissues. They’re being picked up the way we pick up other things that are small and transport them. It wasn’t like tearing apart the tissue at all. It really made me think, whoa, I mean, the idea that this can cross all the tissue barriers got really worrisome to me.
Laine Perfas: I mentioned at the top of the episode too that we’re also literally eating them; and I guess that was surprising to me, that from consuming them it was also entering our bodies because like you said, Don, I would have thought that our bodies were able to just digest it. And there it goes. And it’s no longer in us. But that’s not the case. So I’d love to hear more about that as well.
Demokritou: That’s what I call the two I’s in terms of the exposure: inhalation, of course, but also ingestion. Actually, if you look at the human population data, you will see that ingestion is the primary route of exposure, and we are consuming about 5 grams of micronanoplastics per week; that’s the equivalent of a credit card. So ingestion is a major route. And we have a grant from USDA, and we’re looking also at how these micronanoplastics from soil make it to edible plants. And also through the trophic transfer, they can make it through the food chain. Now, I think Don put it nicely, anything in the nanoscale, it’s very clear that they can bypass biological barriers. When it comes to micronanoplastics, especially nanoplastics, they are everywhere. It’s the byproduct of degradation over 50, 60 years that we throw them out there. We have many evidence that these nanoplastics in particular, because of their hydrophobic nature, they can really become systemic. We found them everywhere. Every organ, every week, it’s a study that we found them. In this organ, in that organ. Of course, for those of us that were doing toxicology, we know that the dose makes the poison. So it’s not just the identification in organs, it’s also in certain quantities that they can really cause harm. And that’s the question we’re trying to address right now.
Johnson: I’d like to also bring up microplastics in the air, I think [that] is underappreciated, and we don’t have a standardized method for measuring it, especially on a populational level. And we do research looking at how wildfires, how the smoke impacts your immune system. We know that there’s also microplastics being inhaled with the smoke. But the standardization of measurements isn’t there yet to be able to accurately quantify how much we are inhaling, especially in those special circumstances with increased air pollution or wildfire smoke.
Ingber: I learned that one of the biggest sources of microplastics is tires; as tires run on the road and you’ll see those little black marks, it’s leaching into the air. And then also textiles. We’re just surrounded, we’re bathed in them. The other point I think that’s important, it may not be obvious, is that, when we ingest foods, we digest them, right? We break them down to small molecules that could be absorbed, and those that are not digested usually go out in feces or urine. But plastics are not broken down to their individual links, if you like, we call them monomers. Yes, their small bits are released through breakdown, but that’s more physical breakdown over time and not chemical breakdown. And that’s what makes them really so dangerous.
Laine Perfas: It seems like a study every day is coming out, oh, we found microplastics here in the body, or in this organ, or here. So we know that it’s very commonly present in our bodies. What do we know so far about the health risks, about how that actually is affecting our bodies, and some of the dangers that it can cause?
Demokritou: In public health, usually we use epidemiology to come up with the associations of exposures to whatever disease. And that’s an area that when it comes to micronanoplastics, we don’t have many studies out there. So we need to do more of these kind of studies to link the associations between exposures and diseases. Now, in terms of the toxicology of micronanoplastics, that’s a little bit more mature field. I’ve been studying nanoscale materials and plastics for probably 10 years, especially nanoplastics through the NIH-funded center we had at Harvard. The evidence that nanoscale plastics in particular, that they can bypass biological barriers, I think, that’s very strong. You put them in the lungs, they will translocate, become systemic, they will go to different organs. Also, at the cellular molecular level, we see red flags. We can see them becoming internalized in the gut, for instance, we even found them in the nuclei. We publish a ton of papers on DNA damage, the potential to generate reactive oxygen species and interfere with cellular functions. Actually, I’m using one of the organ-on-a-chip platforms that Don developed to understand how they behave in the gut. We have a ton of evidence, but we need to understand mechanistically what’s happening. Not all micronanoplastics are created equal. They have unique properties, different polymers, different sizes, morphologies. So we have a ton of work to do to study potential health effects. We are not there yet.
Ingber: I’m excited to hear you working with intestine-on-a-chip because that would be a great model for this. We’ve also integrated microbiome into these intestine chips. And the microbiome can also modify the plastics, or they can be modified by the plastic. And these plastics can bring toxins along with them, like heavy metals. And that’s a whole area I think that people have explored more in the marine-life area, but it’s probably affecting us as well. In our first paper that I mentioned, where we looked at nanoparticle transport in the lung chip, we could absolutely show activation of inflammation. And inflammation is, you know, at the heart of almost every disease and also even cancer progression. It was the nanoparticles being taken up that drove that.
Johnson: There was a recent study that did come out that I thought was pretty exciting, where they looked at patients who are undergoing carotid endarterectomies, so they were scraping the plaque out of the arteries, and then they analyzed the plaque for microplastics. And they found that those who had microplastics in the plaque, I believe it was at least 50 percent, I think it was more than 50 percent, they were able to associate the microplastic levels with morbidity and mortality three years later. And to my knowledge, it’s one of the first studies that were able to show basically a clinical outcome associated with the presence of the microplastics.
Demokritou: One additional point, Sam. All the plastics that we’re currently using, they are loaded with additives. And those additives have plenty of literature, historical, epidemiological, and toxicological data that they can cause harm. We know the phthalates, that they’re there to make the plastic soft, they’re endocrine-disrupting chemicals. And these micronanoplastics now, when they’re taken up by our cells and the body, they are the carriers of these additives so they can more efficiently deliver chemicals into our body. So, it’s kind of a Trojan horse of delivering chemicals from the plastics themselves. We have a paper now in review that as these micronanoplastics wandering in the environment, they carry other environmental pollutants on their surface because of their hydrophobic nature. And also they can really deliver these environmental pollutants more effectively in our bodies.
Ingber: It’s like the cumulative exposure to chemicals of any type that matters, not just whether you saw it for a short time. And these particles, when they’re ingested, are what we call sustained-release vehicles, meaning they’re just sitting there and every day they’re releasing a little bit for the rest of the lifetime of those cells in your gut or other organs, and so that makes it even more dangerous.
Laine Perfas: That actually gets to a clarifying question I wanted to ask, which is: Is it the plastic itself that’s dangerous, or is it all the additives and chemicals that are in the plastic?
Demokritou: I think it’s a combination of the two. I mean, you cannot rule one or the other. And that is one of the major knowledge gaps that we have in toxicological studies of micronanoplastics. Actually, in my labs right now, we have developed platforms that enable us to simulate what happens to a plastic material across its life cycle as it goes through these stressors, which can be mechanical, it can be weathering, UV photo oxidation, thermal stressors. We can shorten what happens to plastic material over 50 years and we make what we call reference micronanoplastic materials that are environmentally relevant. And those are the ones we use in our toxicological studies.
Laine Perfas: Mary, I actually wanted to ask you, with microplastics, I saw some of your work was actually looking at different communities who are at higher risk of being affected than others. Could you talk about that a little bit?
Johnson: A lot of our research has looked at communities that are disadvantaged and are typically exposed to high levels of chronic air pollution and/or wildfire smoke. We don’t have hard data yet, but within that context, it is thought that those groups would also be more exposed to microplastics. A similar concept would be those who are living next to industries, and those also tend to be the disadvantaged populations, and so those types of vulnerable populations are probably going to fall fairly similar to what we see for those exposed to chronic air pollution and exposure to wildfire smoke. There have been a limited amount of studies, not our own, that have found that, at least in the indoor environment, infants are more at risk for exposure, and the second category would be preschoolers, and as you go up, you become less and less exposed to the microplastics. As we’re able to monitor indoor or outdoor air for actual microplastic numbers, we’ll have a better idea of the different age ranges and vulnerabilities.
Ingber: Do they think that infants and small children have greater exposure because everything that you give to a kid is plastic because it tends to be cheap and safer and they put everything in their mouths? Is that why or something about absorb from air?
Johnson: I do believe that the younger populations are exposed to a lot more plastics, but I believe that particular study was focusing on indoor dust, which is where the microplastics were primarily found.
Laine Perfas: I guess I’m surprised. I would have thought that people who are older, who’ve been on earth longer, would be more at risk.
Johnson: Yeah, I believe it was, they were referring to the inhalation basically of the dust in the indoor home, which makes sense, and the younger they are and they’re on the floors and not washing their hands and crawling and closer to the dust itself in the home. But it’s a very limited amount of research that’s come out so far, looking at those associations. So I do think much more needs to be done looking at which populations are truly vulnerable and we should be targeting to try to prevent exposure as much as possible.
Demokritou: I think it might be worth [discussing] a little bit more what happens on the global scale. There is a very recent paper, came out in Nature, which is the first effort to quantify and come up with an inventory of emissions of plastics around the world. And in high-income countries, generally speaking, we did well, not amazingly well, but we did well in containing and controlling the plastic pollution. That’s not the case in low-income countries. About 50, 60 million tons of plastic waste out of these 250 million metric tons that we’re generating globally, it’s uncontained. And in these countries, you will see open fires and burning plastic. You will see debris everywhere and the populations in those countries are getting exposed at higher levels compared to all of us in the United States, for instance. So we really need to do a little bit more at a global scale because environmental pollutants, they transcend boundaries. So if you put micronanoplastics in the air, they will travel, especially the nanoscale particles. They can go thousands of miles, and they can be everywhere. We need to keep that in mind as we address this global issue.
Johnson: I’d like to also bring up, there was a study that sampled tap water globally and 80 percent of the samples had microplastics in them. It’s certainly another issue that maybe isn’t talked about as much, although there was a study looking at bottled water and levels of microplastics, and it was pretty shocking how high the concentrations were in the bottles. So many areas, I think, need to be addressed.
Ingber: It obviously depends on whether you have copper pipes, or nowadays PVC is used all over and actually plastic tubing is used now quite a bit. It probably varies enormously, but it’s hard to escape.
Sam, I think it’s important to note that plastics is a general term that are, you know, materials that can be easily formed and take shapes that you desire and that’s where its initial term came from. In medicine, there are plastics that are biodegradable. That’s very different. Those can be broken down to the individual monomers or links. And so we’re talking about ones that can’t be broken down here, often petroleum industry-based and so forth. There’s a lot of work going on at my institute and other places in terms of both bioplastics, things that are easily even compostable, you can put it in the compost to break it down, or ways to remediate and break down plastics that are all over the place. I think that is really where hope lies.
Laine Perfas: I actually wanted to ask a question about that. How are we doing when it comes to discovering plastic alternatives?
Demokritou: We really need to substitute the non-biodegradable plastics, especially the ones that are single-use. Myself and Kit Parker from Wyss Institute, we have this project trying to extract biopolymers from food waste and then turn them into potential nanofibers to replace food packaging, which is a major source of plastic. And actually most of the food packaging we’re using, it’s single-use. So it will end up in landfills. We developed and we published a paper in Nature, I think a year or two ago, that we developed the first water-soluble, washable plastic material that can be used as an alternative for food packaging.
Ingber: Maybe 10 years ago, we developed a material that was inspired by insect cuticle, right? Think of a lobster, you know, or a beetle, very hard shell, but they’re also flexible. And it turns out that it’s all the same material that’s almost like plywood made up of layers; it’s called chitin in insects. And so we made something we called shrilk, which was the chitin, or breakdown products called chitosan, from shrimp shells, and silk fibroin, from silk. Chitosan is used in medical products for wound healing and silk is in surgical sutures, so it’s safe. And when we recreated the layer-by-layer structure, we actually had material that was optically clear but had the strength of aluminum foil and it could be molded. And so we were really excited about that. The challenges in that world, and I think for even Philip’s technology, is scaling up manufacturing so that you can do this at a cost-effective way.
The other side of this that we’ve had some really exciting recent breakthroughs is breaking down of plastics. And there have been some groups recently published that they can find microbes, bacteria that can degrade one type of plastic. And that’s gotten a huge excitement. We have bacteria that we isolated from the microbiome of a worm that degrades plastic on its own. And it degrades at least four different types of the major classes of plastic, and that’s led to a startup company; that was between my lab and George Church’s at the Wyss Institute. The reason we’re excited about this new startup is that it can work in a complex mixture. It doesn’t need to go through the current recycling pipeline of isolating each bit and then trying to degrade one at a time. And I think that’s the kind of thing that we need.
Laine Perfas: Given what we know and don’t know, are there things that we can do as consumers to reduce our exposure now, while we wait for some of these other changes to happen?
Demokritou: I think we should start from the societal level. We really need to actually come up with a strategy, and this is what we call the “three R waste hierarchy.” So we need to reduce use of plastics. We need to reuse plastics as much as we can. Of course, recycling has to increase, where 9 percent in the United States, it’s very low, and the single-use plastics that will end up in the environment, we need to substitute them, if possible with biodegradable, nontoxic plastics. Of course, we need to get all stakeholders on board. We need to redesign products, which may add to the cost, and it’s also the question of who is paying this add-on cost? Is it the consumer? Those are fundamental questions that we really need to start discussing. And the most effective approach is to do a source reduction. If we reduce the use, if all of us reduce the use of plastics, myself included, I think that can be a really good start.
Ingber: Think about how quickly solar and wind have changed in terms of energy. It required huge political shifts and financial incentives. And I think it’s got to be at that level. I mean, sure, every individual could stop buying plastic water bottles and using plastic bags and use wood cutting boards, but it’s got to be top-down at the same time.
Demokritou: There is this effort by United Nations to put in place a legally binding global plastics treaty, similar to the climate treaty, which is gaining traction actually the last couple years; there are already, I think, 150 countries that signed this treaty. And again, this is at the global level because we need to see the plastic crisis across the board, not only at the local level, and not only at the high-income countries.
Johnson: I guess I would say, obviously, yeah, everything has to be dealt with at a global level, but even so, as a consumer I feel like I can’t wait, I want to minimize my own exposure. And I think having the mindset of when you purchase something, actually knowing what you’re buying helps. Simple examples would be clothing, synthetic clothing versus buying an all-cotton product or an area rug. Synthetic rugs are really cheap and soft. And wool rugs cost more, but you would be reducing the potential amount of microplastics that you’re being exposed to. You mentioned the kitchen. I do try to have nothing plastic used in the kitchen, if possible, it’s not always possible, and avoid, if there is plastic, it being exposed to heat, which can make the chemicals leach out faster. And I think it would be helpful to consumers to have some type of labeling on products, especially until we get these biodegradable plastics, so people are more educated and can make better choices in trying to minimize their exposure to the plastics.
Laine Perfas: So all of these are good suggestions for minimizing future exposure on a global and personal scale. Is there anything that we know of that we can do to remove the plastic that’s already in our bodies?
Ingber: Yeah. I think that’s something we need to figure out.
Demokritou: Another important element, Sam, is, we need better monitoring. Plastics, micro- and nanoplastics, are not on the list of the chemicals that we’re monitoring in terms of biomonitoring. I know some states like California, they’re trying to include micronanoplastics in their plans. And also the reporting is very important. But we need to develop the methods to be able to do it efficiently, the identification and quantification, because it’s not just the identification. It’s not that I found one microplastic in my bottle of water. So it’s the dose makes the poison. We need to quantify our exposures at the human population level.
Laine Perfas: So if I could give each of you a magic wand that you could wave to either speed up the research on something that’s already happening, or to just solve an aspect of this problem, what do you think are the things that would make the biggest difference on this issue right now?
Ingber: You heard Philip say that we want to reduce, replace, reuse. What I’m seeing out there, and I get to see technologies that are really out there coming down the pipeline at the Institute, I really do think this idea of harnessing the way some organisms can break down plastics. And so it is possible. And so what we have to do is not only find ones that break it down, but link it into a cycle so that you have a full remediation, reuse, replacement cycle. And do it in a cost-effective way. It has to be cost-effective, or it will never get anywhere.
Johnson: I guess for my wish, I think having a scalable method to accurately measure the microplastics, whether it’s air or water or in tissue, would be really advantageous, so we can begin to better understand the exposures obviously, but the health impacts.
Demokritou: I think, definitely we need technologies to clean the mess we created, but we really need to start thinking of how we can reduce the use of plastics, because we can’t just throw toxic compounds out there and then develop the technologies to clean the mess.
Laine Perfas: Thank you all for joining me for this conversation. I learned a lot and I really appreciate it.
Ingber: Thank you.
Demokritou: Thank you.
Laine Perfas: Thanks for listening; to find links to all of our episodes and a transcript of this one, visit harvard.edu/thinking. This episode was hosted and produced by me, Samantha Laine Perfas. It was edited by Ryan Mulcahy, Simona Covel, and Paul Makishima with additional production and editing support from Sarah Lamodi. Original music and sound designed by Noel Flatt. This podcast was produced by Harvard University, copyright 2024.
Many black holes detected to date appear to be part of a pair. These binary systems comprise a black hole and a secondary object — such as a star, a much denser neutron star, or another black hole — that spiral around each other, drawn together by the black hole’s gravity to form a tight orbital pair.
Now a surprising discovery is expanding the picture of black holes, the objects they can host, and the way they form.
In a study appearing today in Nature, physicists at MIT and Caltech report that they have observed a “black hole triple” for the first time. The new system holds a central black hole in the act of consuming a small star that’s spiraling in very close to it, completing an orbit every 6.5 days — a configuration similar to most binary systems. But surprisingly, a second star appears to also be circling the black hole, though at a much greater distance. The physicists estimate this far-off companion is orbiting the black hole every 70,000 years.
That the black hole seems to have a gravitational hold on an object so far away is raising questions about the origins of the black hole itself. Black holes are thought to form from the violent explosion of a dying star — a process known as a supernova, by which a star releases a huge amount of energy and light in a final burst before collapsing into an invisible black hole.
The team’s discovery, however, suggests that if the newly-observed black hole resulted from a typical supernova, the energy it would have released before it collapsed would have kicked away any loosely bound objects in its outskirts. The second, outer star, then, shouldn’t still be hanging around.
Instead, the team suspects the black hole formed through a more gentle process of “direct collapse,” in which a star simply caves in on itself, forming a black hole without a last dramatic flash. Such a gentle origin would hardly disturb any loosely bound, faraway objects.
The presence of the very far-off star thus suggests the system’s black hole was born through direct collapse. And while astronomers have observed violent supernovae for centuries, the team says the new triple system could be the first evidence of a black hole that formed through this quieter process.
“We think most black holes form from violent explosions of stars, but this discovery helps call that into question,” says study author Kevin Burdge, a Pappalardo Fellow in the MIT Department of Physics. “This system is super exciting for black hole evolution, and it also raises questions of whether there are more triples out there.”
The study’s co-authors at MIT are Erin Kara, Claude Canizares, Deepto Chakrabarty, Anna Frebel, Sarah Millholland, Saul Rappaport, Rob Simcoe, and Andrew Vanderburg, along with Kareem El-Badry at Caltech.
Tandem motion
The discovery of the black hole triple came about almost by chance. The physicists found it while looking through Aladin Lite, a repository of astronomical observations, aggregated from telescopes in space and all around the world. Astronomers can use the online tool to search for images of the same part of the sky, taken by different telescopes that are tuned to various wavelengths of energy and light.
The team had been looking within the Milky Way galaxy for signs of new black holes. Out of curiosity, Burdge reviewed an image of V404 Cygni — a black hole about 8,000 light years from Earth that was one of the very first objects ever to be confirmed as a black hole, in 1992. Since then, V404 Cygni has become one of the most well-studied black holes, and has been documented in over 1,300 scientific papers. However, none of those studies reported what Burdge and his colleagues observed.
As he looked at optical images of V404 Cygni, Burdge saw what appeared to be two blobs of light, surprisingly close to each other. The first blob was what others determined to be the black hole and an inner, closely orbiting star. The star is so close that it is shedding some of its material onto the black hole, and giving off the light that Burdge could see. The second blob of light, however, was something that scientists did not investigate closely, until now. That second light, Burdge determined, was most likely coming from a very far-off star.
“The fact that we can see two separate stars over this much distance actually means that the stars have to be really very far apart,” says Burdge, who calculated that the outer star is 3,500 astronomical units (AU) away from the black hole (1 AU is the distance between the Earth and sun). In other words, the outer star is 3,500 times farther from the black hole than the Earth is from the sun. This is also equal to about 100 times the distance between Pluto and the sun.
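Those numbers hang together under simple orbital arithmetic. The check below assumes a combined system mass of about 9 solar masses, a round figure chosen for illustration rather than taken from the paper, and applies Kepler's third law in solar units (period in years, separation in AU, mass in solar masses):

```python
import math

a_au = 3500        # outer star's separation from the black hole, in AU
pluto_au = 39.5    # Pluto's average distance from the sun, in AU
print(f"separation vs. Pluto's orbit: {a_au / pluto_au:.0f}x")   # ~89, order 100

# Kepler's third law in solar units: P[yr] = sqrt(a[AU]**3 / M[solar masses]).
m_total = 9.0                                  # assumed total mass, solar masses
period_yr = math.sqrt(a_au**3 / m_total)
print(f"outer orbital period: ~{period_yr:,.0f} years")          # ~69,000 years
```

With these inputs the period comes out near 69,000 years, in line with the article's quoted 70,000.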
The question that then came to mind was whether the outer star was linked to the black hole and its inner star. To answer this, the researchers looked to Gaia, a satellite that has precisely tracked the motions of all the stars in the galaxy since 2014. The team analyzed the motions of the inner and outer stars over the last 10 years of Gaia data and found that the stars moved exactly in tandem, compared to other neighboring stars. They calculated that the odds of this kind of tandem motion are about one in 10 million.
“It’s almost certainly not a coincidence or accident,” Burdge says. “We’re seeing two stars that are following each other because they’re attached by this weak string of gravity. So this has to be a triple system.”
Pulling strings
How, then, could the system have formed? If the black hole arose from a typical supernova, the violent explosion would have kicked away the outer star long ago.
“Imagine you’re pulling a kite, and instead of a strong string, you’re pulling with a spider web,” Burdge says. “If you tugged too hard, the web would break and you’d lose the kite. Gravity is like this barely bound string that’s really weak, and if you do anything dramatic to the inner binary, you’re going to lose the outer star.”
To really test this idea, however, Burdge carried out simulations to see how such a triple system could have evolved and retained the outer star.
At the start of each simulation, he introduced three stars (the third being the black hole, before it became a black hole). He then ran tens of thousands of simulations, each one with a slightly different scenario for how the third star could have become a black hole, and subsequently affected the motions of the other two stars. For instance, he simulated a supernova, varying the amount and direction of energy that it gave off. He also simulated scenarios of direct collapse, in which the third star simply caved in on itself to form a black hole, without giving off any energy.
“The vast majority of simulations show that the easiest way to make this triple work is through direct collapse,” Burdge says.
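The logic of those simulations can be caricatured in a few lines: at 3,500 AU the outer star is bound by only a couple of kilometers per second, while supernova natal kicks are typically tens to hundreds of kilometers per second. The toy model below ignores kick direction and ejected mass, and its masses and kick range are illustrative stand-ins, not the study's parameters.

```python
import math
import random

G_MSUN = 1.327e20   # gravitational parameter of one solar mass, m^3/s^2
AU = 1.496e11       # astronomical unit, meters

m_total = 9.0       # assumed mass interior to the outer orbit, solar masses
a = 3500 * AU       # outer star's separation from the black hole

# Escape speed at the outer star's distance: anything faster unbinds it.
v_esc = math.sqrt(2 * G_MSUN * m_total / a)
print(f"escape speed at 3,500 AU: {v_esc / 1000:.1f} km/s")      # ~2.1 km/s

# Toy Monte Carlo: draw a natal kick speed, treat it directly as the change
# in relative velocity, and ask whether the outer star stays bound.
random.seed(1)
trials = 100_000
bound = sum(random.uniform(0.0, 400e3) < v_esc for _ in range(trials))
print(f"bound fraction under random 0-400 km/s kicks: {bound / trials:.2%}")
# A direct collapse corresponds to a ~0 km/s kick, which always stays bound.
```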
In addition to giving clues to the black hole’s origins, the outer star has also revealed the system’s age. The physicists observed that the outer star happens to be in the process of becoming a red giant — a phase that occurs at the end of a star’s life. Based on this stellar transition, the team determined that the outer star is about 4 billion years old. Given that neighboring stars are born around the same time, the team concludes that the black hole triple is also 4 billion years old.
“We’ve never been able to do this before for an old black hole,” Burdge says. “Now we know V404 Cygni is part of a triple, it could have formed from direct collapse, and it formed about 4 billion years ago, thanks to this discovery.”
This work was supported, in part, by the National Science Foundation.
Within the human brain, movement is coordinated by a brain region called the striatum, which sends instructions to motor neurons in the brain. Those instructions are conveyed by two pathways, one that initiates movement (“go”) and one that suppresses it (“no-go”).
In a new study, MIT researchers have discovered an additional two pathways that arise in the striatum and appear to modulate the effects of the go and no-go pathways. These newly discovered pathways connect to dopamine-producing neurons in the brain — one stimulates dopamine release and the other inhibits it.
By controlling the amount of dopamine in the brain via clusters of neurons known as striosomes, these pathways appear to modify the instructions given by the go and no-go pathways. They may be especially involved in influencing decisions that have a strong emotional component, the researchers say.
“Among all the regions of the striatum, the striosomes alone turned out to be able to project to the dopamine-containing neurons, which we think has something to do with motivation, mood, and controlling movement,” says Ann Graybiel, an MIT Institute Professor, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the new study.
Graybiel has spent much of her career studying the striatum, a structure located deep within the brain that is involved in learning and decision-making, as well as control of movement.
Within the striatum, neurons are arranged in a labyrinth-like structure that includes striosomes, which Graybiel discovered in the 1970s. The classical go and no-go pathways arise from neurons that surround the striosomes, which are known collectively as the matrix. The matrix cells that give rise to these pathways receive input from sensory processing regions such as the visual cortex and auditory cortex. Then, they send go or no-go commands to neurons in the motor cortex.
However, the function of the striosomes, which are not part of those pathways, remained unknown. For many years, researchers in Graybiel’s lab have been trying to solve that mystery.
Their previous work revealed that striosomes receive much of their input from parts of the brain that process emotion. Within striosomes, there are two major types of neurons, classified as D1 and D2. In a 2015 study, Graybiel found that one of these cell types, D1, sends input to the substantia nigra, which is the brain’s major dopamine-producing center.
It took much longer to trace the output of the other set, D2 neurons. In the new Current Biology study, the researchers discovered that those neurons also eventually project to the substantia nigra, but first they connect to a set of neurons in the globus pallidus, which inhibits dopamine output. This pathway, an indirect connection to the substantia nigra, reduces the brain’s dopamine output and inhibits movement.
The researchers also confirmed their earlier finding that the pathway arising from D1 striosomes connects directly to the substantia nigra, stimulating dopamine release and initiating movement.
“In the striosomes, we’ve found what is probably a mimic of the classical go/no-go pathways,” Graybiel says. “They’re like classic motor go/no-go pathways, but they don’t go to the motor output neurons of the basal ganglia. Instead, they go to the dopamine cells, which are so important to movement and motivation.”
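As a reading aid, the four pathways described above can be collected into a small summary table; this mirrors the article's wording, not the study's full circuit diagram.

```python
# Net effects of the four striatal pathways as described in the text above.
pathways = {
    "matrix 'go'": ("matrix -> motor output neurons", "initiates movement"),
    "matrix 'no-go'": ("matrix -> motor output neurons", "suppresses movement"),
    "striosomal D1": ("striosome -> substantia nigra", "stimulates dopamine release"),
    "striosomal D2": ("striosome -> globus pallidus -> substantia nigra",
                      "inhibits dopamine output"),
}
for name, (route, effect) in pathways.items():
    print(f"{name:15s} {route:50s} {effect}")
```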
Emotional decisions
The findings suggest that the classical model of how the striatum controls movement needs to be modified to include the role of these newly identified pathways. The researchers now hope to test their hypothesis that input related to motivation and emotion, which enters the striosomes from the cortex and the limbic system, influences dopamine levels in a way that can encourage or discourage action.
That dopamine release may be especially relevant for actions that induce anxiety or stress. In their 2015 study, Graybiel’s lab found that striosomes play a key role in making decisions that provoke high levels of anxiety; in particular, those that are high risk but may also have a big payoff.
“Ann Graybiel and colleagues have earlier found that the striosome is concerned with inhibiting dopamine neurons. Now they show unexpectedly that another type of striosomal neuron exerts the opposite effect and can signal reward. The striosomes can thus both up- or down-regulate dopamine activity, a very important discovery. Clearly, the regulation of dopamine activity is critical in our everyday life with regard to both movements and mood, to which the striosomes contribute,” says Sten Grillner, a professor of neuroscience at the Karolinska Institute in Sweden, who was not involved in the research.
Another possibility the researchers plan to explore is whether striosomes and matrix cells are arranged in modules that affect motor control of specific parts of the body.
“The next step is trying to isolate some of these modules, and by simultaneously working with cells that belong to the same module, whether they are in the matrix or striosomes, try to pinpoint how the striosomes modulate the underlying function of each of these modules,” Lazaridis says.
They also hope to explore how the striosomal circuits, which project to the same region of the brain that is ravaged by Parkinson’s disease, may influence that disorder.
The research was funded by the National Institutes of Health, the Saks-Kavanaugh Foundation, the William N. and Bernice E. Bumpus Foundation, Jim and Joan Schattinger, the Hock E. Tan and K. Lisa Yang Center for Autism Research, Robert Buxton, the Simons Foundation, the CHDI Foundation, and an Ellen Schapiro and Gerald Axelbaum Investigator BBRF Young Investigator Grant.
A new system can accurately assess the chromosomal status of in vitro-fertilized embryos using only time-lapse video images of the embryos and maternal age, according to a study from investigators at Weill Cornell Medicine.
Investigators from Weill Cornell Medicine have discovered a defense mechanism that protects skin cancer cells from oxidative stress and helps them spread.
Angela Odoms-Young is the critical issue lead for extension programming in the areas of human nutrition, food safety and security and obesity prevention, effective October 1, 2024. The appointment reflects CCE's dedication to leveraging campus resources and CCE educators and collaborators across the state, to ensure that needs are met and key metrics and benchmarks for educational work are identified.
Images of coastal houses being carried off into the sea by eroding coastlines and storm surges are becoming more commonplace as climate change brings rising sea levels and more powerful storms. In the U.S. alone, coastal storms caused $165 billion in losses in 2022.
Now, a study from MIT shows that protecting and enhancing salt marshes in front of protective seawalls can significantly help protect some coastlines, at a cost that makes this approach reasonable to implement.
The new findings are being reported in the journal Communications Earth & Environment, in a paper by MIT graduate student Ernie I. H. Lee and professor of civil and environmental engineering Heidi Nepf. This study, Nepf says, shows that restoring coastal marshes “is not just something that would be nice to do, but it’s actually economically justifiable.” The researchers found that, among other things, the wave-attenuating effects of a salt marsh mean that the seawall behind it can be built significantly lower, reducing construction cost while still providing as much protection from storms.
“One of the other exciting things that the study really brings to light,” Nepf says, “is that you don’t need a huge marsh to get a good effect. It could be a relatively short marsh, just tens of meters wide, that can give you benefit.” That makes her hopeful, Nepf says, that this information might be applied in places where planners may have thought saving a smaller marsh was not worth the expense. “We show that it can make enough of a difference to be financially viable,” she says.
While other studies have previously shown the benefits of natural marshes in attenuating damaging storms, Lee says that such studies “mainly focus on landscapes that have a wide marsh on the order of hundreds of meters. But we want to show that it also applies in urban settings where not as much marsh land is available, especially since in these places existing gray infrastructure (seawalls) tends to already be in place.”
The study was based on computer modeling of waves propagating over different shore profiles, using the morphology of various salt marsh plants — the height and stiffness of the plants, and their spatial density — rather than an empirical drag coefficient. “It’s a physically based model of plant-wave interaction, which allowed us to look at the influence of plant species and changes in morphology across seasons,” without having to go out and calibrate the vegetation drag coefficient with field measurements for each different condition, Nepf says.
The researchers based their benefit-cost analysis on a simple metric: To protect a certain length of shoreline, how much could the height of a given seawall be reduced if it were accompanied by a given amount of marsh? Other ways of assessing the value, such as including the value of real estate that might be damaged by a given amount of flooding, “vary a lot depending on how you value the assets if a flood happens,” Lee says. “We use a more concrete value to quantify the benefits of salt marshes, which is the equivalent height of seawall you would need to deliver the same protection value.”
They used models of a variety of plants, reflecting differences in height and stiffness across seasons. They found a twofold variation in the plants’ effectiveness at attenuating waves, but all provided a useful benefit.
To demonstrate the details in a real-world example and help to validate the simulations, Nepf and Lee studied local salt marshes in Salem, Massachusetts, where projects are already underway to try to restore marshes that had been degraded. Including the specific example provided a template for others, Nepf says. In Salem, their model showed that a healthy salt marsh could offset the need for an additional seawall height of 1.7 meters (about 5.5 feet), based on satisfying a rate of wave overtopping that was set for the safety of pedestrians.
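For readers who want to see the arithmetic, the sketch below illustrates the equivalent-seawall-height metric in Python. The 1.7-meter height offset for Salem comes from the study as reported here; the shoreline length, marsh width, and unit costs are hypothetical placeholders, not values from the paper.

```python
# Minimal sketch of the equivalent-seawall-height benefit-cost metric.
# Only the 1.7 m offset is from the article; all costs are assumptions.

def seawall_cost(height_m: float, length_m: float, unit_cost_per_m2: float) -> float:
    """Cost of a seawall, scaled by its face area (height x length)."""
    return height_m * length_m * unit_cost_per_m2

def marsh_benefit_cost_ratio(
    height_offset_m: float,         # seawall height the marsh replaces (1.7 m in Salem)
    shoreline_length_m: float,
    wall_unit_cost_per_m2: float,   # assumed cost per m^2 of wall face
    marsh_width_m: float,
    marsh_unit_cost_per_m2: float,  # assumed restoration cost per m^2 of marsh
) -> float:
    """Benefit = avoided seawall construction; cost = marsh restoration."""
    benefit = seawall_cost(height_offset_m, shoreline_length_m, wall_unit_cost_per_m2)
    cost = marsh_width_m * shoreline_length_m * marsh_unit_cost_per_m2
    return benefit / cost

# Example with made-up costs: a 50 m-wide marsh along 1 km of shoreline.
ratio = marsh_benefit_cost_ratio(
    height_offset_m=1.7,
    shoreline_length_m=1000,
    wall_unit_cost_per_m2=2000,   # hypothetical $/m^2
    marsh_width_m=50,
    marsh_unit_cost_per_m2=30,    # hypothetical $/m^2
)
print(f"Benefit-cost ratio: {ratio:.1f}")  # > 1 means the marsh pays for itself
```

With these made-up costs the avoided wall construction outweighs the restoration expense, which is the sense in which the researchers call marsh restoration economically justifiable.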
However, the real-world data needed to model a marsh, including maps of salt marsh species, plant height, and shoots per bed area, are “very labor-intensive” to put together, Nepf says. Lee is now developing a method to use drone imaging and machine learning to facilitate this mapmaking. Nepf says this will enable researchers or planners to evaluate a given area of marshland and say, “How much is this marsh worth in terms of its ability to reduce flooding?”
The White House Office of Information and Regulatory Affairs recently released guidance for assessing the value of ecosystem services in the planning of federal projects, Nepf explains. “But in many scenarios, it lacks specific methods for quantifying value, and this study is meeting that need,” she says.
The Federal Emergency Management Agency also has a benefit-cost analysis (BCA) toolkit, Lee notes. “They have guidelines on how to quantify each of the environmental services, and one of the novelties of this paper is quantifying the cost and the protection value of marshes. This is one of the applications that policymakers can consider on how to quantify the environmental service values of marshes,” he says.
The software that environmental engineers can apply to specific sites has been made available online for free on GitHub. “It’s a one-dimensional model accessible by a standard consulting firm,” Nepf says.
“This paper presents a practical tool for translating the wave attenuation capabilities of marshes into economic values, which could assist decision-makers in the adaptation of marshes for nature-based coastal defense,” says Xiaoxia Zhang, an assistant professor at Shenzhen University in China who was not involved in this work. “The results indicate that salt marshes are not only environmentally beneficial but also cost-effective.”
The study “is a very important and crucial step to quantifying the protective value of marshes,” adds Bas Borsje, an associate professor of nature-based flood protection at the University of Twente in the Netherlands, who was not associated with this work. “The most important step missing at the moment is how to translate our findings to the decision makers. This is the first time I’m aware of that decision-makers are quantitatively informed on the protection value of salt marshes.”
Lee received support for this work from the Schoettler Scholarship Fund, administered by the MIT Department of Civil and Environmental Engineering.
Marine monitoring is the bedrock of efforts to ensure the health of our oceans and marine life, as it enables the collection of data to understand the biogeochemical processes that drive coastal and ocean systems. There is growing recognition of the importance of monitoring the marine environment at the regional scale to safeguard our shared waters. The data collected could help guide the development of legislation and strategies aimed at protecting our oceans and marine ecosystems.
Advancing marine science research and education
To advance collaboration in marine science research, the Plymouth Marine Laboratory (PML) and the NUS Tropical Marine Science Institute (TMSI) inked a Memorandum of Understanding (MoU) in September 2024, which outlines a framework for collaboration and sharing of knowledge in marine and climate science research and education between the two institutions. As part of the MoU, PML will collaborate with NUS TMSI, as well as the St John's Island National Marine Laboratory (SJINML) hosted by TMSI, for various marine science research projects.
To kickstart this partnership, SJINML and the Marine Environment Sensing Network (MESN), with support from the British High Commission Singapore and the Conservation Artists Collective, organised the “Marine Monitoring for Action: Safeguarding our Shared Seas through Marine Environment Sensing and Data” workshop, held in Singapore from 7 to 11 October 2024. The partnership was formally announced at the workshop.
This partnership builds on the long-standing collaboration between Singapore and the UK dating back to the 1950s, which Mr Nikesh Mehta, the British High Commissioner to Singapore, reflected on during his welcome address at the workshop. Mr Mehta said that continuing this historic collaboration underscores the importance of global cooperation in marine science research and education.
“As part of the UK and Singapore’s strategic partnership, we are committed to strengthening even further our science and technology partnership, to go further than we did by developing capabilities to address global issues and challenges,” he added.
The workshop drew more than 65 participants from the region and beyond, including Malaysia, Indonesia, Thailand, the Philippines, Vietnam, and the UK, all working towards a common goal of fostering collaboration and innovation in marine monitoring to protect our oceans and marine ecosystems for the benefit of future generations.
Professor Yaacob Ibrahim, Chairman of the SJINML Governing Board, in his opening remarks, also highlighted the importance of establishing global partnerships. “Marine issues cannot therefore be managed by one country to safeguard our shared seas. Close collaboration, sharing of knowledge, expertise and data are essential,” he said.
The Marine Monitoring for Action workshop is endorsed by UNESCO’s Intergovernmental Oceanographic Commission as a UN Ocean Decade workshop.
A science-based approach for future oil spill response
Mr Desmond Lee, Minister for National Development and Minister-in-charge of Social Services Integration, delivered the opening remarks on the second day of the workshop. He emphasised the need for data in shaping policy to protect marine environments. “To ensure the sustainability of our marine biodiversity and its ecosystem, it is crucial that we adopt a science-based approach to monitor and protect our regional waters. Only with science and data can we make a strong case for conservation,” he said.
Mr Lee also announced a 15-month national monitoring plan, launched in response to the oil spill that occurred in Singapore in June 2024, to collect baseline data and to track both the impact on marine habitats and their recovery. The research team will comprise experts from NParks, NUS TMSI, SJINML and the National Institute of Education.
Dr Tan Koh Siang, Principal Research Fellow at NUS TMSI, who is part of the research team said, “We are interested to see what effects this oil spill has on [organism] communities that are not visually obvious.” He added that the team hopes that their research can provide science-based support in finding ways to respond to future oil spills.
Assisting in the marine monitoring efforts is the MESN buoy, a system that enables real-time monitoring of seawater quality to strengthen climate change and ecological research. The MESN buoy houses a resident suite of sensors and modules for round-the-clock marine monitoring, and aims to collect data on more than 30 parameters through near real-time sensing and monthly cruises.
Dr Jani Tanzil, Senior Research Fellow at NUS TMSI and Facility Director of SJINML, added that there are plans to deploy two more MESN buoys, one in the Johor Strait off Pulau Ubin and another in the south Singapore Strait, off Raffles Lighthouse (Pulau Satumu). This will help to expand marine monitoring capabilities and capture the quality of water flushing into Singapore.
The science essential for the ocean we envision
With a schedule packed with talks from marine scientists and hands-on opportunities, the workshop outlined how effective marine monitoring at various levels can help shape marine management and policies, as well as drive the implementation of national, regional, and international legislation and initiatives to protect the marine environment.
Professor Matthew Frost, Head of International Office at PML, highlighted during his presentation that global policies and actions to protect the ocean and marine life “would only work with data and scientific information feeding into it.”
The workshop also provided a platform for participants from across the region to take part in practical sessions to develop skills and acquire the tools needed to obtain rigorous and scientifically credible marine data. During the workshop, the participants actively engaged in discussions about current practices and challenges in marine monitoring, and explored opportunities for collaborative impact-led research for Southeast Asian regional seas.
“The Marine Monitoring for Action workshop was a great opportunity for regional and international stakeholders to harmonise and understand the good practices of marine monitoring in Southeast Asia,” said Dr Wee Hin Boo, Senior Lecturer from Universiti Kebangsaan Malaysia, who attended the workshop.
Participants were also given a tour of the SJINML facility on St John’s Island, which is located approximately 6.5 km south of the main island of Singapore. During the visit, the MESN team, which includes researchers from both NUS and NTU, introduced the different techniques used in marine environment sensing.
“My biggest takeaway from the workshop is learning about the theory of change and the process of developing actionable solutions starting from the impact that we want to make,” said Ms Denise Yu, Research Assistant at NUS TMSI.
The Marine Monitoring for Action workshop helped to foster an environment of mutual learning and exchange of ideas between countries, fuelling future possibilities of regional collaborative research efforts to help protect our shared oceans.
“Armed with the data and knowledge from marine monitoring, we can better position ourselves to be more resilient to the challenges ahead, especially with the uncertainty of climate change and other environmental disturbances from increasing coastal urbanisation,” said Dr Tanzil.
Steering and navigating manual wheelchairs on pavements costs wheelchair users a lot of energy and places a strain on their joints. Two ETH employees have discovered a brilliant and simple solution that they are now developing further to bring to market.
NUS deepened its historic and longstanding relationship with Universiti Malaya (UM) this month, organising a series of events to celebrate the universities’ academic and social connections.
On 15 October, the inaugural UM-NUS Joint Workshop on Biomedical Engineering 2024 was held on the NUS Kent Ridge campus, bringing together researchers, educators, and scholars to discuss advancements in biomedical engineering and technology. Organised by the NUS Institute for Health Innovation & Technology (iHealthTech) and supported by the NUS Office of Alumni Relations, the event included keynote addresses and talks from 10 UM and NUS researchers, as well as a tour of iHealthTech. Topics discussed included the impact of the fourth industrial revolution on healthcare and the opportunities that artificial intelligence and the metaverse bring to the field.
“We were excited we could hold this joint event, bringing together leading minds from our two sister universities,” said NUS Professor Lim Chwee Teck, NUS Society Chair Professor of Biomedical Engineering and Director of iHealthTech. “Workshops like these serve as an important platform for exchanging ideas and exploring potential areas of collaboration.”
Professor Dr Yvonne Lim, Associate Deputy Vice-Chancellor (Academic and International) at UM, echoed the sentiment. “This partnership highlights our commitment to advancing innovation that will drive transformative solutions in healthcare technology,” she said. “Together, we aim to inspire new ideas and pave the way for innovation that will benefit both Malaysia and Singapore."
Speaking at the UM-NUS Gala Dinner at The Fullerton Hotel Singapore later that evening, NUS President Professor Tan Eng Chye detailed the legacy of cooperation and camaraderie between the two universities. Recent partnerships include the 2023 UM-NUS Joint Symposium on Infectious Diseases and Translational Program, as well as the 2024 expansion of the NUS Overseas Colleges programme to Kuala Lumpur, with UM serving as NUS’ partner university. These examples “demonstrate the shared success of UM and NUS,” said Professor Tan. “By continuing to create and encourage opportunities for collaboration and exchange, UM and NUS are both enriched.”
The Gala Dinner was hosted by NUS Chancellor, His Excellency President Tharman Shanmugaratnam, and attended by UM Chancellor, His Royal Highness Sultan Nazrin Muizzuddin Shah, the Sultan of the state of Perak and Deputy Yang di-Pertuan Agong of Malaysia. “Events like this are what makes the bond of our historical friendship and ties ever more meaningful, and ever more robust, rooted in unyielding trust and camaraderie,” noted Vice-Chancellor of UM, Professor Dato' Seri Ir. Dr Noor Azuan bin Abu Osman.
Close to 200 NUS and UM alumni and staff joined the formal dinner, which included two stirring performances from the NUS Yong Siew Toh Conservatory of Music’s alumni string quartet.
A friendly face-off on the fairway
Running parallel to the Joint Workshop and Gala Dinner was the 53rd UM-NUS Inter-University Tunku Chancellor Golf Tournament, which NUS hosted on 15 and 16 October. Over 100 golfers from UM and NUS participated in the tournament, which was first held in 1968 in Kuala Lumpur.
Over the two days, faculty, staff, and alumni took the opportunity to reconnect with old friends and forge new connections while engaging in friendly competition. Intermittent rain did not dampen the spirits or sportsmanship of the golfers, who enjoyed the social game at Orchid Country Club, and the competitive game at Seletar Country Club. Led by Golf Captain Mr Bernard Toh, the NUS team emerged victorious for the first time since 2019, marking the close to another celebration of the productive partnership between the two universities.
Researchers push the limits of sound wave control, unlocking the potential for faster, clearer wireless communication and quantum information processing technologies.
English professor, journalist says first step to better prose is being aware that no one has to read you
Universities are repositories of fascinating ideas. So why is academic writing so boring? Leonard Cassuto thinks it’s all a matter of keeping in mind that good writing is about keeping the reader interested. (Hint: Be a better storyteller.)
Cassuto, A.M. ’85, Ph.D. ’89, a Fordham English professor and journalist, recently published a new book, “Academic Writing as if Readers Matter.” He said he got the idea for the project while teaching expository writing as a graduate student, helping writers with different backgrounds and interests hone their communication skills, particularly in academic writing.
Cassuto’s desire to help make academic writing more accessible and compelling has dovetailed in recent decades with his participation in ongoing discussions to rethink graduate education. The goal there is to focus more tightly on work that advances society, not just some arcane academic interest — along with being able to better explain that work to diverse audiences.
Cassuto recently spoke to the Gazette about his work. This interview has been edited for length and clarity.
You start your book by pointing out that all academic writers begin their careers writing for one person: their teacher. Why does that create problems?
This is the primal scene of academic writing: some student writing some paper for some teacher someplace. It happens again and again and is the process by which we are socialized into the community of academic writers.
The distinguishing feature of that primal scene is one that I think gets very little attention, namely that the reader (in this case the teacher) is being paid. You grow up as a writer where your audience is one who can never be bored or discouraged because they’re being paid to read to the end of it. You’re learning in some sense that the reader doesn’t matter that much and that they’re going to be with you no matter what.
This is inevitably the root of many potential bad habits, which can burst into flower as writers become more and more advanced. One of the core motivations of this book is to encourage writers to recognize this relationship and to try to eclipse it — to write as though the reader is not being paid. The results are going to be much better in every respect.
Academic writing has a bad reputation, particularly among readers who value good prose. Would you say that criticism is deserved?
Increasingly so, and this book serves as both a handbook and an advice book. That’s the book’s beating heart.
But another central objective is to understand academic writers as a community. When we are writing, we’re not just writing for ourselves. If we alienate our audience, individually and collectively, it’s part of a larger problem that we’re all creating.
Academia and higher education have to be a public good. In order for that to happen, we need to be able to communicate in a way that people are going to hear what we’re saying and receive the message. Whether these are scientists writing about how to make semiconductors or whether it’s a professor of politics who’s writing about how we should understand the geopolitical context of an event that’s happening someplace in the world, if academia is going to do its job and take care of the public, then the communication has to be intact. Otherwise it creates disdain for that project and skepticism about what emerges from the academy. It’s the business of the community, collectively.
The tips and the advice about how to become a better writer have to be a collective objective. We have to get better; we have to repair the relation between town and gown so that we can continue to take care of each other productively.
What are some of the most common pitfalls for academic writers?
Here are a few of the greatest hits: One is that too many academic writers don’t understand the importance of story. Stories are how human beings have been communicating with each other since before we could write. Human beings are storytelling animals. We are people who live by story, and every argument is a story, and every story is an argument. If academic writers are most accustomed to thinking of themselves as making arguments, then the argument needs to unfold in a narrative way. If it doesn’t, it’s not going to be as successful.
Then, academic writing is riddled with jargon. Jargon doesn’t have to be a bad thing; it can be an efficiency, where people are talking to other people who understand the same language. But jargon creates an in-group and an out-group, those who understand and those who do not. If your reader is not necessarily a member of that in-group, then you’re being — simply put — unfriendly. You’re saying, “I don’t really care about you.” That sort of relationship is not productive of collaboration, either in the present or in the future.
And third, too many academic writers don’t understand the necessary relationship between the abstract and the concrete. Without the concrete, the abstract is a bunch of airy-fairy ideas that are floating off in the distance. Every time the reader thinks they’ve got a grasp on them, it turns out they are wisps that drift away. But without the abstract, the concrete is just a pile of bricks, a bunch of facts that aren’t being tied together by anything. We need the abstract and the concrete to coexist. If a writer neglects this necessary connection, the chances are they’re not going to be able to be as persuasive.
You make the case that academic writing can actually be fun. How would making academic writing more fun prove a benefit to a scholar or a student?
I think that if writers remember they are people who are talking to other people, that’s the first step. There is a difference between good and bad scientific writing, and it isn’t just about questions of clarity; good science writing is also animated by sensibility. Sensibility can take many forms.
In this book I use a lot of lively metaphors. They’re designed not only to teach but also to make the reader smile. There are different ways to communicate a sense of self or a sense of voice. The conventions of a particular discipline might dictate guidelines around that, but you can still have a voice.
Creative writing and academic writing are often seen as being at odds with one another. Can academic writing be creative?
“Creative writing” is a term of art that we use to talk about fiction, poetry, and drama, but if we think about what the words in the phrase actually mean, something that’s creative is original; it has vitality.
You can find a lot of fiction, poetry, and drama that isn’t creative by that definition, because it’s cliched, hackneyed, dead on the page. And inversely, you can find writing that is not fiction — such as academic writing — that exhibits all of these qualities that we attribute to the creative. There’s originality; there’s the vital spark, a sense of life on the page.
All writers should think of themselves as creative writers: you’re going to do your best work if you try to create something where there wasn’t something before.
Some of these bad writing habits stem from academic anxiety or the fear the writer might be seen as “clodpoll,” as you so eloquently put it. How can writers work through that fear?
Academia as a culture promotes some bad habits of thought and being. Too many people in academia think it’s more important to show that you’re smart than it is to communicate with somebody. In fact, a writer, fearing being called “not smart,” is going to construct all kinds of defenses that inhibit understanding and communication. It tells their reader, “If you work like a sled dog, you might be able to understand it; unless you can’t, in which case, well, that’s your problem.”
I think too many academic readers have had the experience of pushing through academic writing that behaves that way. We’re not taught often enough that writing clearly and crisply is more apt to be seen as smart, more apt to gain respect — and also more likely to communicate learning.
But culture is very persistent. I understand how hard it can be to change culture, and this book is a gesture and a call for us to examine the culture in which a lot of academic writing is produced.
It’s worth noting that public awareness of A.I. — specifically, ChatGPT, which can be used for writing — burst on the scene as you were working on this book. How do you see A.I. helping or hurting academic writing?
A.I. is a tool. We have absorbed the impact of new technology before, and I think we’re going to do it again. A.I. raises legitimate concerns in many areas of our practice, but I think ultimately it will take its place in our tool kit, and we will do what we need to do in order to use it thoughtfully and productively.
I think the great fear that A.I. is going to replace us is overwrought because of the importance of sensibility. We’re still at the very beginning of learning how to use it, but I hope writers will ultimately benefit from having another tool in their kit, particularly if it can help them do their work faster.
What do you hope readers will take away from reading your book?
Writers need to communicate with a reader who is an actual person. If we understand and appreciate that those people are out there wanting to learn from us as writers, then we can understand and anticipate their needs. We’ll produce better work. That work has a chance to be part of something bigger than us.
Academic writing is an enterprise, and each individual writer should be the best writer that they’re capable of being. We’re in this together, and we have to be able to understand that this community needs good writing in order for us to exist sustainably and productively with the larger society.
Nick Jewell, associate director of club sports, intramural sports, and sport camps for MIT’s Department of Athletics, Physical Education, and Recreation (DAPER), became a recreation professional because of the impact club sports (competitive, nonvarsity athletic teams) have made on his life. His participation in club sports has allowed him to find community anywhere he travels, whether domestically or abroad. Developing community is a pillar of DAPER, alongside creating an environment that provides education, inspires leadership, and promotes wellness, which makes Jewell’s professional and personal background an asset to the department.
After graduating from Clemson University with a master’s degree in education, student affairs for college athletics, Jewell moved to Boston. Five years ago, he began his career at MIT overseeing the front desk for DAPER. Moving up the ladder, Jewell now runs a variety of programming throughout the year. Much of his job is dedicated to the execution of MIT’s intramural and club teams.
Annually, MIT fields 20 to 25 intramural sport leagues, with the majority of them competing in the fall. Seasons last between six and eight weeks each semester, and teams are available for various skill levels. Current offerings include badminton, 3v3 basketball, and volleyball. MIT’s Club Sports Program complements the Institute’s intercollegiate athletic and intramural programs. MIT students, faculty, staff, alumni (and their spouses) are encouraged to join one of 34 club teams that range from alpine skiing to wrestling. Intramural sports are intended to be casual, while club sports require players to have a higher level of skill and commitment.
Jewell credits the success of club sports to the students who run them, and lends his supervision as needed. For example, if a club team wants to participate in a tournament in New York City, student officers ask Jewell to approve their participation. After Jewell signs off, the students reserve hotels and transportation, either through the Division of Student Life or by using their allowed budget (which Jewell manages) themselves. Clubs can also fundraise for their travel and have found that the most successful method is to host a tournament on campus. While these are also largely managed by students, Jewell serves as the liaison between the club officers and facility operations to reserve spaces and troubleshoot issues that may arise.
Jewell is also in charge of the MIT All Sports Summer Day Camp, which runs for seven weeks and offers a variety of athletic activities along with swim instruction. Each winter, he hires 50 part-time employees, including counselors, for camp. When camp registration opens, Jewell and his team enter the information of 800 registered campers into their database in time for their arrival on campus.
Always looking for innovative offerings for the community, Jewell recently attended the National Intramural-Recreational Sports Association (NIRSA) conference to learn what other university recreation departments are providing for their students. One takeaway was that arcade games are making a comeback. At the start of the pandemic, MIT students were engaging with each other by playing "Mario Kart" and other interactive video games, as it was easy to stay socially distant and compete while communicating over headsets. When students no longer needed to social distance, they continued to participate in competitive video games. With a squash court that was no longer in use, excitement from students, and newly raised funds, Jewell created MIT’s Esports Room. The room includes a PlayStation 5 and Nintendo Switch with four controllers for each, and a mini movie theater with a large projector and beanbag seating for 15 people. With the equipment in place and the space complete, Jewell’s next plan is to create e-sports tournaments.
Jewell’s pitch about intramural and club sports is simple: join one. When he speaks at orientation for new students, he tells parents about how the offerings from DAPER will enhance their child’s experience as a student — and beyond. Jewell and his colleagues want to ensure that when graduates have a career opportunity in a new city, or if they travel somewhere where they do not speak the language, they will be able to find community through sports.
Soundbytes
Q: What project at DAPER are you the proudest of?
Jewell: During the pandemic, I wanted to help students get outside and stay active. Because of this I created the “Simply Walk to Mordor Challenge” (from “Lord of the Rings”). Students made teams (fellowships) of up to six and added the steps they took each day into a spreadsheet. They could not only race characters Samwise Gamgee and Frodo Baggins, but they could also race other adventuring parties the distance from the Shire to Mount Doom. There was also a personal bar graph that showed students where they were in the book if they wanted to read along while they walked. It gained a lot of traction, and over 100 students participated. I was proud to get it off the ground and we got a lot of positive feedback from the students.
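The mechanic is simple enough to sketch in code: teams pool their daily steps and convert them into miles along the route. In the sketch below, the route distance and the steps-per-mile factor are illustrative assumptions, since the article does not give the figures the original spreadsheet used.

```python
# Toy sketch of the "Simply Walk to Mordor" spreadsheet mechanic. The route
# length and steps-per-mile factor are assumptions for illustration.

SHIRE_TO_MOUNT_DOOM_MILES = 1779  # commonly cited walking distance (assumed)
STEPS_PER_MILE = 2000             # rough adult average (assumed)

def fellowship_progress(daily_steps_per_member):
    """Pool every member's daily steps and convert them to miles walked."""
    total_steps = sum(sum(days) for days in daily_steps_per_member)
    return total_steps / STEPS_PER_MILE

# A fellowship of two members, each with three days of logged steps.
team = [[8000, 10500, 9200], [12000, 7600, 11000]]
miles = fellowship_progress(team)
print(f"{miles:.0f} miles walked, "
      f"{100 * miles / SHIRE_TO_MOUNT_DOOM_MILES:.1f}% of the way to Mount Doom")
```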
Q: What do you like the most about the MIT community?
Jewell: At MIT there is no such thing as a bad idea. Community members come to me with ideas that they know may not come to fruition, but that does not diminish their enthusiasm. For example, a student contacted me who wanted to start a varsity paddle ball team. I told him that starting a varsity team is tough, and we do not have any paddle ball courts. He suggested that we use one of our tennis courts to create a court for paddle ball. Eventually I had to tell him that it wasn’t going to work, but you don’t get creative, fun ideas without tossing everything against the wall and seeing what sticks. I love that students, staff, and faculty are creative enough to come up with ideas and ask, “What if we tried this?” Sometimes we can't, but when we can it’s magic.
Q: What advice would you give to a new staff member at MIT?
Jewell: Go to all of the meetings and activities that you can and interact with people outside of your department. There is a lot happening on campus that you can participate in and a lot of interesting people to meet. If a staff member wants to play flag football with undergraduates, we encourage that! Staff members can also get a membership to the DAPER gym, and we offer a lot of different athletic events and recreation opportunities for both mental and physical health.
The National Academy of Medicine recently announced the election of more than 90 members during its annual meeting, including MIT faculty members Matthew Vander Heiden and Fan Wang, along with five MIT alumni.
Election to the National Academy of Medicine (NAM) is considered one of the highest honors in the fields of health and medicine and recognizes individuals who have demonstrated outstanding professional achievement and commitment to service.
Matthew Vander Heiden is the director of the Koch Institute for Integrative Cancer Research at MIT, a Lester Wolfe Professor of Molecular Biology, and a member of the Broad Institute of MIT and Harvard. His research explores how cancer cells reprogram their metabolism to fuel tumor growth and has provided key insights into metabolic pathways that support cancer progression, with implications for developing new therapeutic strategies. The National Academy of Medicine recognized Vander Heiden for his contributions to “the development of approved therapies for cancer and anemia” and his role as a “thought leader in understanding metabolic phenotypes and their relations to disease pathogenesis.”
Vander Heiden earned his MD and PhD from the University of Chicago and completed his clinical training in internal medicine and medical oncology at the Brigham and Women’s Hospital and the Dana-Farber Cancer Institute. After postdoctoral research at Harvard Medical School, Vander Heiden joined the faculty of the MIT Department of Biology and the Koch Institute in 2010. He is also a practicing oncologist and instructor in medicine at Dana-Farber Cancer Institute and Harvard Medical School.
Fan Wang is a professor of brain and cognitive sciences, an investigator at the McGovern Institute, and director of the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT. Wang’s research focuses on the neural circuits governing the bidirectional interactions between the brain and body. She is specifically interested in the circuits that control the sensory and emotional aspects of pain and addiction, as well as the sensory and motor circuits that work together to execute behaviors such as eating, drinking, and moving. The National Academy of Medicine has recognized her body of work for “providing the foundational knowledge to develop new therapies to treat chronic pain and movement disorders.”
Before coming to MIT in 2021, Wang obtained her PhD from Columbia University and received her postdoctoral training at the University of California at San Francisco and Stanford University. She became a faculty member at Duke University in 2003 and was later appointed the Morris N. Broad Professor of Neurobiology. Wang is also a member of the American Academy of Arts and Sciences, and she continues to make important contributions to understanding the neural mechanisms underlying general anesthesia, pain perception, and movement control.
MIT alumni who were elected to the NAM for 2024 include:
Leemore Dafny PhD ’01 (Economics);
David Huang ’85 MS ’89 (Electrical Engineering and Computer Science) PhD ’93 (Medical Engineering and Medical Physics);
Nola M. Hylton ’79 (Chemical Engineering);
Mark R. Prausnitz PhD ’94 (Chemical Engineering); and
Konstantina M. Stankovic ’92 (Biology and Physics) PhD ’98 (Speech and Hearing Bioscience and Technology)
Established originally as the Institute of Medicine in 1970 by the National Academy of Sciences, the National Academy of Medicine addresses critical issues in health, science, medicine, and related policy and inspires positive actions across sectors.
“This class of new members represents the most exceptional researchers and leaders in health and medicine, who have made significant breakthroughs, led the response to major public health challenges, and advanced health equity,” said National Academy of Medicine President Victor J. Dzau. “Their expertise will be necessary to supporting NAM’s work to address the pressing health and scientific challenges we face today.”
An Oxford team has taken second place in the United Kingdom and Ireland Programming Contest (UKIEPC), part of the International Collegiate Programming Contest series.
Professors Dr. Silvia Formenti and Dr. Massimo Loda have been elected to the National Academy of Medicine, in recognition of outstanding professional achievement and major contributions to the advancement of the medical sciences.
Nearly £800k in funding from the Advanced Research & Invention Agency (ARIA) will see the launch later this year of a raft of new research projects addressing AI safety.
The prize for the best invention at ETH Zurich 2024 will be awarded on 21 November. An overview, complete with videos, of the five technologies that made it to the final.
It can be hard to connect a certain amount of average global warming with one’s everyday experience, so researchers at MIT have devised a different approach to quantifying the direct impact of climate change. Instead of focusing on global averages, they came up with the concept of “outdoor days”: the number of days per year in a given location when the temperature is not too hot or cold to enjoy normal outdoor activities, such as going for a walk, playing sports, working in the garden, or dining outdoors.
In a study published earlier this year, the researchers applied this method to compare the impact of global climate change on different countries around the world, showing that much of the global south would suffer major losses in the number of outdoor days, while some northern countries could see a slight increase. Now, they have applied the same approach to comparing the outcomes for different parts of the United States, dividing the country into nine climatic regions, and finding similar results: Some states, especially Florida and other parts of the Southeast, should see a significant drop in outdoor days, while some, especially in the Northwest, should see a slight increase.
The researchers also looked at correlations between economic activity, such as tourism trends, and changing climate conditions, and examined how numbers of outdoor days could result in significant social and economic impacts. Florida’s economy, for example, is highly dependent on tourism and on people moving there for its pleasant climate; a major drop in days when it is comfortable to spend time outdoors could make the state less of a draw.
“This is something very new in our attempt to understand the impacts of climate change, in addition to the changing extremes,” Choi says. It allows people to see how these global changes may affect them on a very personal level, as opposed to focusing on global temperature changes or on extreme events such as powerful hurricanes or increased wildfires. “To the best of my knowledge, nobody else takes this same approach” in quantifying the local impacts of climate change, he says. “I hope that many others will parallel our approach to better understand how climate may affect our daily lives.”
The study looked at two different climate scenarios — one where maximum efforts are made to curb global emissions of greenhouse gases and one “worst case” scenario where little is done and global warming continues to accelerate. They used these two scenarios with every available global climate model, 32 in all, and the results were broadly consistent across all 32 models.
The reality may lie somewhere in between the two extremes that were modeled, Eltahir suggests. “I don’t think we’re going to act as aggressively” as the low-emissions scenarios suggest, he says, “and we may not be as careless” as the high-emissions scenario. “Maybe the reality will emerge in the middle, toward the end of the century,” he says.
The team looked at the difference in temperatures and other conditions over various ranges of decades. The data already showed some slight differences in outdoor days from the 1961-1990 period compared to 1991-2020. The researchers then compared these most recent 30 years with the last 30 years of this century, as projected by the models, and found much greater differences ahead for some regions. The strongest effects in the modeling were seen in the Southeastern states. “It seems like climate change is going to have a significant impact on the Southeast in terms of reducing the number of outdoor days,” Eltahir says, “with implications for the quality of life of the population, and also for the attractiveness of tourism and for people who want to retire there.”
He adds that “surprisingly, one of the regions that would benefit a little bit is the Northwest.” But the gain there is modest: an increase of about 14 percent in outdoor days projected for the last three decades of this century, compared to the period from 1976 to 2005. The Southwestern U.S., by comparison, faces an average loss of 23 percent of its outdoor days.
The study also digs into the relationship between climate and economic activity by looking at tourism trends from U.S. National Park Service visitation data, and how that aligned with differences in climate conditions. “Accounting for seasonal variations, we find a clear connection between the number of outdoor days and the number of tourist visits in the United States,” Choi says.
For much of the country, there will be little overall change in the total number of annual outdoor days, the study found, but the seasonal pattern of those days could change significantly. While most parts of the country now see the most outdoor days in summertime, that will shift as summers get hotter, and spring and fall will become the preferred seasons for outdoor activity.
In a way, Eltahir says, “what we are talking about that will happen in the future [for most of the country] is already happening in Florida.” There, he says, “the really enjoyable time of year is in the spring and fall, and summer is not the best time of year.”
People’s level of comfort with temperatures varies somewhat among individuals and among regions, so the researchers designed a tool, now freely available online, that allows people to set their own definitions of the lowest and highest temperatures they consider suitable for outdoor activities, and then see what the climate models predict would be the change in the number of outdoor days for their location, using their own standards of comfort. For their study, they used a widely accepted range of 10 degrees Celsius (50 degrees Fahrenheit) to 25 C (77 F), which is the “thermoneutral zone” in which the human body does not require either metabolic heat generation or evaporative cooling to maintain its core temperature — in other words, in that range there is generally no need to either shiver or sweat.
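As an illustration of the counting behind this metric, the minimal sketch below tallies the days in a temperature series that fall within a chosen comfort band. The synthetic series is a placeholder for real station or model data; the default thresholds mirror the 10 to 25 degrees Celsius range used in the study.

```python
# Minimal sketch of the "outdoor days" metric: count the days whose daily
# temperature falls inside a user-defined comfort band (default 10-25 C).

import random

def count_outdoor_days(daily_temps_c, t_min=10.0, t_max=25.0):
    """Number of days whose temperature lies within [t_min, t_max] C."""
    return sum(t_min <= t <= t_max for t in daily_temps_c)

# Synthetic year of daily temperatures (placeholder for real data).
random.seed(0)
year = [random.gauss(15, 10) for _ in range(365)]

print(count_outdoor_days(year))                     # default comfort band
print(count_outdoor_days(year, t_min=5, t_max=28))  # a user's looser definition
```

The second call mirrors what the researchers’ online tool allows: a user substitutes their own thresholds and sees how the count of outdoor days changes.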
The model mainly focuses on temperature but also allows people to include humidity or precipitation in their definition of what constitutes a comfortable outdoor day. The model could be extended to incorporate other variables such as air quality, but the researchers say temperature tends to be the major determinant of comfort for most people.
Using their software tool, “If you disagree with how we define an outdoor day, you could define one for yourself, and then you’ll see what the impacts of that are on your number of outdoor days and their seasonality,” Eltahir says.
This work was inspired by the realization, he says, that “people’s understanding of climate change is based on the assumption that climate change is something that’s going to happen sometime in the future and going to happen to someone else. It’s not going to impact them directly. And I think that contributes to the fact that we are not doing enough.”
Instead, the concept of outdoor days “brings the concept of climate change home, brings it to personal everyday activities,” he says. “I hope that people will find that useful to bridge that gap, and provide a better understanding and appreciation of the problem. And hopefully that would help lead to sound policies that are based on science, regarding climate change.”
The research was based on work supported by Community Jameel for the Jameel Observatory CREWSnet project and by the Abdul Latif Jameel Water and Food Systems Lab at MIT.
The emeritus Geraldine R. Segal Professor of American Social Thought reflected on the 60th anniversary of the Civil Rights Act of 1964 in conversation with Marcia Chatelain.
What happened when a meteorite the size of four Mount Everests hit Earth?
Anne J. Manning
Harvard Staff Writer
Giant impact had silver lining for life, according to new study
Billions of years ago, long before anything resembling life as we know it existed, meteorites frequently pummeled the planet. One such space rock crashed down about 3.26 billion years ago, and even today, it’s revealing secrets about Earth’s past.
Nadja Drabon, an early Earth geologist and assistant professor in the Department of Earth and Planetary Sciences, has questions about what our planet was like during ancient eons rife with meteoritic bombardment, when only single-celled bacteria and archaea reigned — and when it all started to change. When did the first oceans appear? Continents? Plate tectonics? How did all of those violent impacts affect the evolution of life?
Her new study in Proceedings of the National Academy of Sciences attempts to answer some of these questions, in relation to the inauspiciously named “S2” meteoritic impact of more than 3 billion years ago, for which geological evidence is found in the Barberton Greenstone belt of South Africa. Through the painstaking work of collecting and examining rock samples centimeters apart and analyzing the sedimentology, geochemistry, and carbon isotope compositions they leave behind, Drabon’s team paints the most compelling picture to date of what happened the day a meteorite the size of four Mount Everests paid Earth a visit.
“Picture yourself standing off the coast of Cape Cod, in a shelf of shallow water. It’s a low-energy environment, without strong currents. Then all of a sudden, you have a giant tsunami sweeping by and ripping up the sea floor,” said Drabon.
Graphical depiction of the S2 impact and its immediate aftereffects.
The S2 meteorite, estimated to have been up to 200 times larger than the one that killed the dinosaurs, triggered a tsunami that mixed up the ocean and flushed debris from the land into coastal areas. Heat from the impact caused the topmost layer of the ocean to boil off, while also heating the atmosphere. A thick cloud of dust blanketed everything, shutting down any photosynthetic activity.
But bacteria are hardy, and following impact, according to the team’s analysis, bacterial life bounced back quickly. With this came sharp spikes in populations of unicellular organisms that feed off the elements phosphorus and iron. Iron was likely stirred up from the deep ocean into shallow waters by the aforementioned tsunami, and phosphorus was delivered to Earth by the meteorite itself and from an increase of weathering and erosion on land.
Drabon’s analysis shows that iron-metabolizing bacteria would thus have flourished in the immediate aftermath of the impact. This shift toward iron-favoring bacteria, however short-lived, is a key piece of the puzzle of early life on Earth. According to Drabon’s study, meteorite impact events — while reputed to kill everything in their wake (including, 66 million years ago, the dinosaurs) — carried a silver lining for life.
“We think of impact events as being disastrous for life,” Drabon said. “But what this study is highlighting is that these impacts would have had benefits to life, especially early on, and these impacts might have actually allowed life to flourish.”
These results are drawn from the backbreaking work of geologists like Drabon and her students, hiking into mountain passes that contain the sedimentary evidence of early sprays of rock that embedded themselves into the ground and became preserved over time in the Earth’s crust. Chemical signatures hidden in thin layers of rock help Drabon and her students piece together evidence of tsunamis and other cataclysmic events.
Nadja Drabon.
Photo by Bryant Troung
Drabon with students David Madrigal Trejo and Öykü Mete during fieldwork in South Africa.
Photo courtesy of Nadja Drabon
The Barberton Greenstone Belt in South Africa, where Drabon concentrates most of her current work, contains evidence of at least eight impact events including the S2. She and her team plan to study the area further to probe even deeper into Earth and its meteorite-enabled history.
FAS creates new professorships in civil discourse and AI
Gift from business leader Alfred Lin ’94 and artist Rebecca Lin ’94, part of record 30th reunion giving, builds on critical new efforts on dialogue and generative AI
Harvard University announced today two new professorships in civil discourse and one in artificial intelligence made possible by a gift from alums Alfred Lin ’94 and Rebecca Lin ’94. These professorships are part of a wider donation that will also support these critical areas of work within the Faculty of Arts and Sciences.
The gift comes as the University recently announced a new report on open inquiry, with recommendations for faculty and students on how to debate and disagree in classrooms and within the larger campus community. Edgerley Family Dean Hopi Hoekstra last year launched a Civil Discourse Initiative at FAS, and undergraduates engaged with the Intellectual Vitality Initiative, both of which promote constructive conversations within Harvard College.
“Alfred and Rebecca’s support will help foster the practice and study of civil discourse in our classrooms and on our campus, as well as advance innovation and discovery in AI,” said Hoekstra. “Their formative experience as students and enduring commitment to Harvard is evident in this inspiring gift.”
The gift marks the Lins’ continued commitment to the University over three decades, and comes in celebration of their 30th Harvard College reunion. The new donation is part of a larger contribution from the Class of 1994, which this year set the record for the highest-grossing 30th reunion campaign in Harvard College history. A total of 599 members of the class donated more than $200 million.
Rebecca and Alfred Lin.
The Lin gift will endow two Alfred and Rebecca Lin Professorships in civil discourse, and the Alfred and Rebecca Lin Professor in artificial intelligence in the Harvard John A. Paulson School of Engineering and Applied Sciences. The Lins’ gift will also launch the Edgerley Family Dean’s Innovation Fund for generative AI.
“As I like to say, Harvard students often strive to do ‘both/and’ rather than settling for ‘either/or.’ Alfred and Rebecca have demonstrated that spirit of possibility beautifully with their latest act of generosity,” said President Alan Garber. “By dedicating their support to civil discourse and artificial intelligence, they are both strengthening the foundation of our campus culture and pushing the boundaries of our teaching practices. Progress in these two areas is fundamental to our future as a University. I am deeply grateful for the support of the Lins and their vote of confidence in Harvard.”
“We came to Harvard with strong values. Some of those values were challenged; some of them were reaffirmed; and we believe that it continues to be a special place where dialogue moves important ideas forward,” Alfred Lin said.
“Alfred and I have tried to support Harvard when we could or consistently, but we also believe in supporting Harvard when times are challenging, and we want to help during those times,” Rebecca Lin said.
The Lins hope their gift will help support an environment where people can “disagree and not be disagreeable.” Alfred recalled auditing “Justice,” a government course previously taught by Michael Sandel, the Anne T. and Robert M. Bass Professor of Government, and Harvey Mansfield, William R. Kenan Jr. Professor of Government, Emeritus (and recently renewed by Sandel). He remembers the two professors taking opposing political views on controversial topics in an effort to find truth.
Alfred and Rebecca Lin at Harvard in 1994.
“They would argue the extreme sides. They were never disagreeable, and they would always make you think,” Alfred said. “We modeled what we learned about social discourse in ‘Justice’ or other classes when we were just talking around the table at Quincy Grille.”
The Lins’ endowment of two professorships on civil discourse builds on these memories. The Alfred and Rebecca Lin Professor of Civil Discourse will recognize, for a five-year term, faculty who have made significant contributions, whether through teaching, advising, or mentoring, to fostering students’ ability to engage in meaningful dialogue. The second Lin professorship will support a faculty member whose research and teaching focuses on civil discourse and dialogue, ethics, academic freedom, and freedom of speech.
Alfred studied applied mathematics while Rebecca concentrated in physics at the College. The couple’s earlier gifts gravitated toward supporting financial aid for undergraduates pursuing applied sciences and engineering. Their interest in advancing computer science and artificial intelligence is reflected in their new gifts of the SEAS professorship and to the FAS Dean’s Innovation Fund, which is meant to foreground the importance of integrating generative AI tools into teaching and learning.
Alfred is a partner at Sequoia Capital, where he invests in early-stage companies in financial tech, robotics, and healthcare, among other areas. He sits on several boards, including Airbnb, DoorDash, Houzz, and Zipline. Rebecca is an artist with storyboard credits on various Walt Disney Animation Studios television series and on the feature film “Recess: School’s Out.” She serves on the board of trustees at the California Academy of Sciences and the UCSF Foundation. The couple previously worked at Zappos, where Alfred served as chairman and COO and Rebecca managed real estate.
Significant decline in sexual misconduct at Harvard, survey finds
Most students are aware of reporting mechanisms and support services, but many do not use resources
In April, the University invited all degree-seeking students to participate in the Higher Education Sexual Misconduct and Awareness (HESMA) survey. In a message to Harvard affiliates on Monday, President Alan Garber announced the release of the results.
The HESMA survey, which was conducted by a consortium of 10 universities, was the third in a series Harvard has used to understand and address issues related to sexual assault, misconduct, and harassment on campus. The first two were held in 2015 and 2019.
The new data show a statistically significant decline in sexual misconduct at the University since the 2019 survey and indicate that a majority of bystanders who witnessed misconduct intervened. The results also show a high level of awareness of the reporting mechanisms and support services offered to those who have experienced sexual assault or harassment. Still, a significant number of students reported that they did not use resources following incidents.
These and other survey results will inform sexual assault and misconduct prevention practices and resource allocation.
The Gazette sat down with Peggy Newell, vice president and deputy to the president, and Kathleen McGinn, principal investigator for Harvard’s HESMA survey and Baker Foundation Professor at Harvard Business School, to discuss the findings.
Why is it important for the University to continue to gather data in this area?
Newell: We know from the research that we cannot rely on individual disclosures of reports of sexual harassment and assault to determine the prevalence of harm that is occurring within our community. Climate surveys, such as the Association of American Universities (AAU) surveys conducted in 2015 and 2019 and the HESMA survey conducted this year, offer important and reliable information that we can then use to develop strategies and resources for responding to this public health issue.
What are the most important findings from this year’s HESMA survey?
Newell: As a starting point, we are grateful to our students for taking the time to participate in this critically important research. The survey is long, and the questions touch on very sensitive subject matter, yet our students clearly were seeking to have their voices heard, with over 35 percent of Harvard students participating in the 2024 survey. The overall data show that the prevalence of all forms of non-consensual sexual contact, harassing behavior, and sexual harassment is lower in 2024 than in prior rounds of the survey.
McGinn: The prevalence of sexual misconduct experienced by Harvard students was lower in spring 2024 than in spring 2019, but our students continue to experience sexual harassment and sexual assault. Even one incident of sexual assault on our campus is too many. In the large majority of incidents, students reported other students as the people responsible for the sexual misconduct. While our students are knowledgeable about support resources available on campus, these data show that very few students who experience sexual harassment or assault seek support from Harvard resources or programs. On a more positive note, students who observe behavior they believe could lead to sexual harassment or assault are very likely to intervene.
Newell: Having this information creates an opportunity for us to better understand the prevalence of sexual violence at Harvard currently and to use the data to inform our efforts to prevent harm. It is invaluable to be able to compare changes over the past four years to understand where and how our efforts have or have not made an impact. Additionally, it can help us work on lowering the barriers to seeking support when someone has experienced sexual violence. This is an area that the University is committed to exploring further so that we may better understand what resources and support are needed.
Since the original AAU survey in 2015, there have been many changes to resources at the University, including the formation of the Office for Gender Equity. For example, the SHARE [Sexual Harassment/Assault Resources and Education] Team is made up of trained counselors who offer trauma-informed counseling, groups, and advocacy for students, staff, faculty, and postdoctoral fellows, and it has hired a restorative practitioner. We have also maintained a network of local Title IX Resource Coordinators across the Schools and central administration to provide individual support to community members impacted by sexual harassment or other sexual misconduct, enabling them to continue their work or studies.
What trends do you see around the circumstances of assault and harassment cases?
McGinn: As in years past, the majority of undergraduate students who experienced sexual assault while at Harvard report that incidents of sexual assault begin either in on-campus housing or at on-campus social events and involve alcohol; the majority of assaults take place in on-campus housing. For graduate students, incidents of sexual assault are more likely to begin in off-campus social settings. We hope raising awareness of the circumstances around sexual assault will increase bystander awareness and reduce the likelihood of sexual assault in these settings.
How would you want the results from this survey to help the Harvard community?
McGinn: The first priority is to stop sexual assault and harassment. As a community, we need to speak more frequently and openly about sexual assault and harassment to change long-standing cultural factors that normalize unacceptable, damaging behavior.
Newell: A critical part of this communication involves engagement with our faculty and staff. To encourage this dialogue, in the coming weeks we are launching an updated version of the required eLearning course addressing sexual harassment and other sexual misconduct. This course will serve as a supplement to expanded in-person training that we are offering across the community. This is an important conversation for every member of our community.
McGinn: In addition, Harvard needs to do a better job supporting students who experience sexual harassment or assault. One of the top reasons students provide for not accessing support after being sexually assaulted is that they believe what happened to them is “not serious enough.” We hope sharing and talking about the survey results communicates that every single incident of sexual assault experienced by students at Harvard is serious and unacceptable.
No one at Harvard University should ever have to experience sexual violence, intimate partner violence, sexual harassment, or stalking. If any Harvard community member needs support, there are options. If you would like to reach the confidential SHARE Team, please email oge_SHARE@harvard.edu or call 617.496.5636. If you would like to reach Title IX, please email oge_TitleIX@harvard.edu. For more information, please visit oge.harvard.edu/options.
Despite their impressive capabilities, large language models are far from perfect. These artificial intelligence models sometimes “hallucinate” by generating incorrect or unsupported information in response to a query.
Due to this hallucination problem, an LLM’s responses are often verified by human fact-checkers, especially if a model is deployed in a high-stakes setting like health care or finance. However, validation processes typically require people to read through long documents cited by the model, a task so onerous and error-prone it may prevent some users from deploying generative AI models in the first place.
To help human validators, MIT researchers created a user-friendly system that enables people to verify an LLM’s responses much more quickly. With this tool, called SymGen, an LLM generates responses with citations that point directly to the place in a source document, such as a given cell in a database.
Users hover over highlighted portions of the model’s text response to see the data it used to generate that specific word or phrase. At the same time, the unhighlighted portions show users which phrases need additional attention to check and verify.
“We give people the ability to selectively focus on parts of the text they need to be more worried about. In the end, SymGen can give people higher confidence in a model’s responses because they can easily take a closer look to ensure that the information is verified,” says Shannon Shen, an electrical engineering and computer science graduate student and co-lead author of a paper on SymGen.
Through a user study, Shen and his collaborators found that SymGen sped up verification time by about 20 percent, compared to manual procedures. By making it faster and easier for humans to validate model outputs, SymGen could help people identify errors in LLMs deployed in a variety of real-world situations, from generating clinical notes to summarizing financial market reports.
Shen is joined on the paper by co-lead author and fellow EECS graduate student Lucas Torroba Hennigen; EECS graduate student Aniruddha “Ani” Nrusimha; Bernhard Gapp, president of the Good Data Initiative; and senior authors David Sontag, a professor of EECS, a member of the MIT Jameel Clinic, and the leader of the Clinical Machine Learning Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and Yoon Kim, an assistant professor of EECS and a member of CSAIL. The research was recently presented at the Conference on Language Modeling.
Symbolic references
To aid in validation, many LLMs are designed to generate citations, which point to external documents, along with their language-based responses so users can check them. However, these verification systems are usually designed as an afterthought, without considering the effort it takes for people to sift through numerous citations, Shen says.
“Generative AI is intended to reduce the user’s time to complete a task. If you need to spend hours reading through all these documents to verify the model is saying something reasonable, then it’s less helpful to have the generations in practice,” Shen says.
The researchers approached the validation problem from the perspective of the humans who will do the work.
A SymGen user first provides the LLM with data it can reference in its response, such as a table that contains statistics from a basketball game. Then, rather than immediately asking the model to complete a task, like generating a game summary from those data, the researchers perform an intermediate step. They prompt the model to generate its response in a symbolic form.
With this prompt, every time the model wants to cite words in its response, it must write the specific cell from the data table that contains the information it is referencing. For instance, if the model wants to cite the phrase “Portland Trailblazers” in its response, it would replace that text with the cell name in the data table that contains those words.
“Because we have this intermediate step that has the text in a symbolic format, we are able to have really fine-grained references. We can say, for every single span of text in the output, this is exactly where in the data it corresponds to,” Torroba Hennigen says.
SymGen then resolves each reference using a rule-based tool that copies the corresponding text from the data table into the model’s response.
“This way, we know it is a verbatim copy, so we know there will not be any errors in the part of the text that corresponds to the actual data variable,” Shen adds.
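The core of this pipeline (generate with symbolic references, then resolve them by rule) is straightforward to illustrate. The following minimal Python sketch shows one way such a resolver could work, assuming a hypothetical {{row.column}} placeholder syntax and a tiny invented data table; SymGen’s actual prompt format and reference grammar may differ.

```python
import re

# Hypothetical example data: one row per team from a basketball box score.
# The "{{row.column}}" placeholder syntax is our own illustration, not
# necessarily the grammar SymGen itself uses.
table = {
    "row1": {"team": "Portland Trail Blazers", "points": "110"},
    "row2": {"team": "Utah Jazz", "points": "102"},
}

# A symbolic response as the LLM might emit it: instead of copying values,
# it names the table cell that each cited span comes from.
symbolic = "The {{row1.team}} beat the {{row2.team}}, {{row1.points}}-{{row2.points}}."

PLACEHOLDER = re.compile(r"\{\{(\w+)\.(\w+)\}\}")

def resolve(symbolic_text, data):
    """Replace each placeholder with a verbatim copy of the cited cell and
    record (text, cell) pairs so a viewer can highlight verified spans."""
    citations = []

    def substitute(match):
        row, col = match.groups()
        value = data[row][col]  # verbatim copy: this span cannot be mis-generated
        citations.append((value, f"{row}.{col}"))
        return value

    return PLACEHOLDER.sub(substitute, symbolic_text), citations

text, citations = resolve(symbolic, table)
print(text)       # The Portland Trail Blazers beat the Utah Jazz, 110-102.
print(citations)  # [('Portland Trail Blazers', 'row1.team'), ...]
```

Because each substituted span is copied verbatim from the table, a viewer can render those spans as highlighted, pre-verified text and leave only the connective prose for human review.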
Streamlining validation
The model can create symbolic responses because of how it is trained. Large language models are fed reams of data from the internet, and some data are recorded in “placeholder format” where codes replace actual values.
When SymGen prompts the model to generate a symbolic response, it uses a similar structure.
“We design the prompt in a specific way to draw on the LLM’s capabilities,” Shen adds.
During a user study, the majority of participants said SymGen made it easier to verify LLM-generated text. They could validate the model’s responses about 20 percent faster than if they used standard methods.
However, SymGen is limited by the quality of the source data. The LLM could cite an incorrect variable, and a human verifier may be none the wiser.
In addition, the user must have source data in a structured format, like a table, to feed into SymGen. Right now, the system only works with tabular data.
Moving forward, the researchers are enhancing SymGen so it can handle arbitrary text and other forms of data. With that capability, it could help validate portions of AI-generated legal document summaries, for instance. They also plan to test SymGen with physicians to study how it could identify errors in AI-generated clinical summaries.
This work is funded, in part, by Liberty Mutual and the MIT Quest for Intelligence Initiative.
The EU AI Act is designed to ensure that AI is transparent and trustworthy. For the first time, ETH computer scientists have translated the Act into measurable technical requirements for AI. In doing so, they have shown how well today's AI models already comply with the legal requirements.
The National University of Singapore's School of Computing (NUS Computing) has entered into a partnership with FPT, a leading global technology corporation based in Vietnam, to advance the field of artificial intelligence (AI). This collaboration plans a joint investment of US$50 million, to be contributed by FPT, NUS, and other key players in the local and regional AI ecosystems over the next five years, aiming to drive pioneering research in AI and enhance talent development.
This partnership will not only strengthen FPT's capacity to commercialise AI solutions and improve its R&D capabilities, but also foster the development of a top-tier AI workforce, enhancing its competitive advantage in the APAC region and beyond.
New AI Lab to propel collaborative research, innovation, and commercialisation initiatives
A key focus of this partnership is the establishment of a state-of-the-art AI Lab. Combining the strengths of NUS’ research and FPT’s industry expertise, the new AI Lab will accelerate cutting-edge research in diverse domains of AI, including machine learning, data analytics, natural language processing, and computer vision, benefiting Singapore, the Asia Pacific region, and beyond.
Hosted at NUS Computing, the new AI Lab will be part of the University’s dynamic AI ecosystem, collaborating with the NUS AI Institute (NAII) – which brings together AI researchers and expertise across the University. The new AI Lab’s innovative research projects will focus on AI and automation, emphasising real-world applications in various industries, such as banking and insurance, logistics and transportation, aviation and airline, energy and utilities, manufacturing, and more. In addition, the new AI Lab will produce joint research papers, case studies, and white papers for publication in internationally recognised journals and conferences, sharing findings with the academic and business communities.
NUS Computing and FPT will also explore opportunities to commercialise AI-driven solutions, including the joint development of AI products, services, and platforms for global markets. By focusing on real-world challenges, the AI Lab will harness the potential of AI to drive positive advancements in sectors critical to Singapore’s development and global progress.
Building capacity and boosting AI talent
Talent development is another cornerstone of the partnership. FPT and NUS Computing will conduct joint programmes, such as internships, workshops, training courses, and PhD research opportunities, to nurture AI talent. These initiatives will cultivate a pool of professionals with the skills and expertise to lead future advancements in AI and automation across Singapore and the wider APAC region.
Driving Innovation Together
FPT Corporation Founder and Chairman Dr Truong Gia Binh said, “FPT believes AI is a pivotal accelerator in shaping the future. For more than a decade, FPT has been actively pursuing AI research and development to stimulate innovations and has integrated AI into all our services and solutions. We also invested heavily in the training and development of an AI-ready workforce. The close partnership with the renowned NUS can help us harness AI power to drive mutual growth and success not only in Singapore and Vietnam but globally.”
Mr David Nguyen, FPT Asia Pacific Chief Executive Officer, emphasised the strategic significance of the collaboration: "The establishment of the AI Lab in Singapore is a cornerstone of our partnership, where we will develop groundbreaking solutions to address challenges across industries that are critical to the region’s growth and global competitiveness such as healthcare, banking and insurance, logistics and transportation, aviation and airline, energy and utilities, manufacturing, and more. By leveraging each other’s expertise, we aim to accelerate innovation and drive impactful results locally, in the Asia-Pacific region, and worldwide."
Professor Tan Kian Lee, Dean of NUS Computing, said, “This synergistic partnership brings together the complementary strengths of NUS Computing and FPT. We aim to bring innovative AI solutions to real-world challenges, and at the same time, contributing to the AI ecosystem in Singapore and globally through the development of a highly skilled AI workforce.”
FPT has over a decade of experience in AI research and development. Most recently, it announced investments of US$174 million to establish an AI centre in Binh Dinh, Vietnam, and a plan to invest US$200 million to develop an AI factory utilising NVIDIA’s advanced graphics chips and software. These AI initiatives are further boosted by extensive global partnerships with leading AI players such as NVIDIA, Landing AI, and AITOMATIC, and by founding membership of the AI Alliance led by IBM and Meta. The tech firm also boasts an AI workforce of over 1,500 engineers, supplemented by some 1,300 FPT University students majoring in AI each year. Its AI Residency programme, established in collaboration with Mila Quebec AI Institute, also actively cultivates the next generation of AI talent.
NUS, for its part, has strong capabilities in AI research. The University has forged strong connections with government agencies, industry, and international partners through various AI initiatives. To deepen its influence in the AI landscape, NUS launched NAII in March 2024; the institute focuses on both fundamental and applied research in AI and explores the societal implications of AI.
The Cambridge-GSK Translational Immunology Collaboration (CG-TIC) combines University and GSK expertise in the science of the immune system, AI and clinical development with access to patients and their data provided by Cambridge University Hospitals.
GSK is investing more than £50 million in CG-TIC, further strengthening Cambridge’s position as Europe’s leading life sciences cluster.
GSK plc is making this investment to establish the Cambridge-GSK Translational Immunology Collaboration (CG-TIC), a five-year collaboration with the University of Cambridge and Cambridge University Hospitals. The collaboration is focused on understanding the onset of a disease, its progression, how patients respond to therapies and on developing biomarkers for rapid diagnosis. Ultimately, the goal is to trial more effective, personalised medicines.
The collaboration will focus on kidney and respiratory diseases, both of which affect large numbers of people worldwide. Kidney disease is estimated to affect 850 million people (roughly 10% of the world’s population) (International Society of Nephrology) and chronic respiratory diseases around 545 million (The Lancet).
Many types of kidney disease remain poorly understood and treatments, where they exist, tend to have limited efficacy. Chronic kidney disease is particularly unpleasant and debilitating for patients, often leading to end-stage disease. Treatments such as transplant and dialysis involve complex medical regimes and frequent hospital visits, making effective prevention and treatment the aim.
To make progress in treating these challenging disease areas, CG-TIC will apply an array of new techniques, including the use of cutting-edge single cell technologies to characterise how genes are expressed in individual cells. AI and machine learning have a critical role to play in transforming how data is combined and interrogated.
Using these techniques, the ambition is to initiate new studies and early-phase trials of new therapies for a number of hard-to-treat diseases affecting the kidneys. The same techniques will be applied to respiratory diseases, and findings will be shared across the disease areas, potentially helping to identify better treatments for these different targets.
Peter Kyle, Secretary of State for Science, Innovation and Technology, welcomed the collaboration: "The UK's life sciences industry is thriving, driving innovation and improving lives. This collaboration between GSK and the University of Cambridge demonstrates our country's leading research and development capabilities.
“By focusing on cutting-edge research and harnessing the power of AI, this has the potential to advance the treatment of immune-related diseases, which could benefit patients both here in the UK and internationally. It's a clear example of how collaboration between industry, academia, and healthcare can deliver tangible results and strengthen the UK's position in healthcare innovation."
Tony Wood, Chief Scientific Officer, GSK, added: “Collaboration is at the heart of scientific progress and is fundamental to how we do R&D at GSK. We’re excited to build on our existing work with the University of Cambridge to further this world-leading scientific and technological capability in the UK. By bringing together Cambridge’s expertise and our own internal capabilities, including understanding of the immune system and the use of AI to accelerate drug development, we have an opportunity to help patients struggling with complex disease.”
The aim of CG-TIC is to improve outcomes for patients, and Cambridge provides a unique environment in which to involve them: Cambridge University Hospitals plays a pivotal role in the collaboration, and Royal Papworth Hospital NHS Foundation Trust, the UK’s leading heart and lung hospital, is a likely future partner.
Home to the hospitals and to much of the collaboration’s research activity, the Cambridge Biomedical Campus provides a unique environment where academia, industry and healthcare can come together and where human translational research is supported by the National Institute for Health and Care Research (NIHR) Cambridge Biomedical Research Centre.
Professor Deborah Prentice, Vice-Chancellor of the University of Cambridge, said: “The University sits at the heart of Europe’s leading life sciences cluster, where excellent research and the NHS’s clinical resources combine with the talent generated by the many innovative bioscience companies that call Cambridge home. Through this very important collaboration with GSK, Cambridge will be able to drive economic growth for the UK while improving the health of people in this country and around the world.”
Roland Sinker, CEO of Cambridge University Hospitals NHS Foundation Trust, also welcomed the collaboration, saying: “We are very excited to be part of this important partnership, which is another example of Cambridge experts working together to develop transformational new therapies, and use existing ones more precisely, to improve outcomes for patients with chronic and debilitating conditions.”
The Cambridge-GSK Translational Immunology Collaboration will be co-led by Nicolas Wisniacki, VP, Clinical Research Head, GSK, and David Thomas, Professor of Renal Medicine, University of Cambridge and principal investigator at the Cambridge Institute for Therapeutic Immunology and Infectious Diseases.
The ambition of the partnership is to treat immune-related diseases more precisely with existing therapies and to rapidly develop new ones.
First lady of Ukraine Olena Zelenska (center) takes in the Widener Memorial Room with curator Peter X. Accardo and University Librarian Martha Whitehead.
Olena Zelenska presents Harvard Library with books, shows appreciation for its contribution to Ukrainian studies
When Olha Aleksic came to Harvard as a graduate student from Ukraine, it was to study the history of Christianity at the Divinity School. During an internship in Widener Library’s Slavic division, she discovered her passion for libraries, which led to a career in collection development and reference librarianship working with Ukrainian materials.
Nearly 20 years later, it was Aleksic, now Harvard Library’s Ukrainian bibliographer, who greeted the first lady of Ukraine, Olena Zelenska, at the Widener. It was a key stop in Zelenska’s visit to the University, which was celebrating the 50th anniversary of the Harvard Ukrainian Research Institute.
During the Sept. 24 visit, Aleksic introduced Zelenska to Harvard Library’s Ukrainian collections, one of the largest and most comprehensive outside of Europe. Among the valuable Ukrainian items in Harvard’s collections are Ivan Fedorov’s “Apostol” and “Primer” (1574), the first books printed in Ukraine.
“The first lady specifically chose to visit Harvard Library because libraries are places where history can be preserved,” noted Aleksic.
Zelenska was officially welcomed in the Widener Rotunda by Martha Whitehead, vice president for the Harvard Library and University librarian.
“We see our collections as a vital resource for Harvard’s Ukrainian Studies programs and as a treasure trove for the distinguished visiting scholars who come to work and study at the Ukrainian Research Institute,” said Whitehead. “Our Ukrainian collections will tell the story of Ukraine and its people far into the future.”
Zelenska noted her deep appreciation for Harvard Library’s contribution to Ukrainian Studies, presenting Whitehead with “Ukraine and Ukrainians,” an art book by Ivan Honchar, to add to Harvard’s collection.
To add to Harvard Library’s collection, Zelenska presents Martha Whitehead with “Ukraine and Ukrainians,” which depicts the country’s story.
Zelenska gave Harvard Library three damaged books rescued from a ruined printing warehouse in Kharkiv. They will be carefully conserved by the library’s Preservation Services.
“This is a very important volume that provides in-depth information about Ukraine and Ukrainians. We’re so pleased to gift it to Harvard Library,” Zelenska said.
Accepting the books on behalf of the library, Whitehead emphasized its longstanding commitment to its global collections and expanding world knowledge.
“We collect and preserve global voices for present and future generations of scholars,” she said. “Our collections are a vital resource for Harvard’s Ukrainian Studies programs, and we want the full range of Ukrainian thought and experience to be represented here.”
Zelenska also gave Harvard Library three war-damaged books: a children’s book by Oleksandr (Sashko) Dermanskyi; a Ukrainian translation of a novel by Heather Gudenkauf; and an autobiography by Pavlo Belianskyi. The books were rescued from a ruined warehouse of the Faktor Druk printing house in Kharkiv, which was struck by a missile in May 2024.
Conservators at Harvard Library’s Preservation Services are currently creating custom enclosures for the books to preserve and protect them against further damage. Part of Harvard Library’s mission is to preserve knowledge for the future. It has long rescued and preserved books and cultural materials from places where they are endangered.
“One of our key strategic priorities is preserving knowledge for the future,” said Whitehead. “We have a fabulous team in our Collections Care Lab who will ensure the best care of these books, and our creative librarians will find opportunities to have our users experience their impact.”
Among her other stops, Zelenska also spoke at Harvard’s Ukrainian Research Institute; a video of her remarks is available on the institute’s website.
Ethics Center Director Eric Beerbohm (from left) with moderator Christopher Robichaud and panelists Shruti Rajagopalan, Tom Malleson, Jessica Flanigan, and Nien-hê Hsieh.
Are the rich different from you and me? Would we be better off without them?
Christy DeSmith
Harvard Staff Writer
Safra Center for Ethics debate weighs extreme wealth, philanthropy, income inequality, and redistribution
Billionaires devote vast sums of money to anti-poverty initiatives and green energy reforms. But the world’s wealthiest also cause disproportionate harm to the environment.
A rousing debate, hosted last week by the Edmond and Lily Safra Center for Ethics, wrestled with the issues of extreme wealth and growing income inequality. Panelists representing fields including philosophy, political economy, and business administration staked contrasting and occasionally unexpected positions on whether the super-rich are a net positive for society.
“The top 1 percent emit the same amount of carbon as 5 billion human beings,” said Tom Malleson, associate professor of social justice and peace studies at King’s University College at Western University in Ontario, Canada. “The best thing you can do is to get rid of those billionaires by redistributing the wealth, particularly if you redistribute it to green technology.”
But billionaires like Bill Gates have invested in poor countries ravaged by climate disaster, argued Jessica Flanigan, Richard L. Morrill Chair in Ethics and Democratic Values at the University of Richmond. Market forces further incentivize the world’s wealthiest to provide jobs and pursue improvements to clean energy infrastructure, she added.
“Those are all presumptive reasons to think that billionaires are helpful toward the global poor and more reliably beneficial to those people than public officials, who are beholden to people in their own political community” who usually are not badly off.
Moderator Christopher Robichaud, a senior lecturer in ethics and public policy at the Harvard Kennedy School of Government, kicked off the conversation by citing recent reports that Elon Musk, CEO of Tesla and SpaceX, is poised to become history’s first trillionaire. “What should we think about a world, maybe right around the corner, that has trillionaires?” he asked.
“Could you imagine a society or a set of institutions in which it would be perfectly just for there to be trillionaires?” asked Nien-hê Hsieh, Kim B. Clark Professor of Business Administration at Harvard Business School. “It probably has features to ensure people’s basic needs are met. It probably has features to ensure that great inequality doesn’t lead to the corruption of public officials, or the breaking of the democratic fabric … Whatever that system is, it is not the system we have today.”
Shruti Rajagopalan, a senior research fellow at George Mason University’s Mercatus Center, noted how few of today’s billionaires inherited their money. Most of the modern era’s richest people earned their fortunes, she emphasized, largely through the stock market.
“There’s a big difference between Genghis Khan and Elon Musk,” she quipped. “And if Elon Musk is getting wealthy, it’s also every single schoolteacher out there whose retirement is invested in one of these funds.”
At one point, Malleson highlighted the role of luck in wealth creation.
“If you have a more productive body — if you have Michael Phelps’ wingspan or Taylor Swift’s voice — good for you,” they said, citing the late Harvard philosophy professor John Rawls’ writings on the arbitrary nature of these traits. “We should think of meritocracy as part of a doctrine of ableism. It’s a prejudiced doctrine that says people should be rewarded for factors that are outside their control and others should be punished — particularly disabled people — for their lack of productivity.”
The conversation expanded to cover big business and low-wage workers, with Walmart proving a favorite lightning rod.
“The poorest families in the United States want to shop at Walmart because the prices are going to be the best,” Rajagopalan noted. “Your stereotypical single mother — trying to feed her children, trying to keep very difficult hours at her job — can walk into a Walmart and get all the basics cheaper than pretty much anywhere else.”
But the company “exploits its employees, crushes unions, and takes out dead peasant insurance policies on workers who are going to die,” Malleson countered. “Its products are cheap because they’re made in sweatshops with abysmal conditions.”
The solution is not as easy as taxing Walmart for redistribution to low-income employees and consumers, Rajagopalan said. “Then we’re assuming there’s a bureaucrat or central allocation plan that can provide those same loaves of bread and cans of milk. … That has been done before and hasn’t worked pretty much anywhere in the world.”
Nobody is talking about communist-style central planning, Malleson said. Alternatives include some form of democratic socialism.
“It would mean Walmart has unions,” they said. “It would mean Walmart has co-determination, where workers are allowed to elect half the board like in Germany. It would mean there are basic labor conditions; there are basic rights and regulations that are very common in many parts of the world, particularly the Nordic countries.”
Sweden has more billionaires per capita than the U.S., Flanigan pointed out, because it still has a market economy that generates sufficient wealth for financing its public institutions.
“How do we materially improve the conditions of the worst off? The best thing we have is a market-based society that encourages investment and innovation; that’s it!” she said. “And the kind of society that produces billionaires is the very same society that’s going to improve conditions for the worst off.”
Hsieh chimed in with yet another option for curbing inequality.
“As somebody here at Harvard who is a good Rawlsian, I want to put forward the idea of property-owning democracy,” he said. “There’s an idea where you do allow for market exchange. You do allow for the private accumulation of wealth. You do allow for private capital … but with a much more egalitarian distribution of property.”
In the discussion’s final minutes, Robichaud picked up on one of their questions about what constitutes a minimum standard of living. He asked panelists how each would propose meeting such a standard for all.
“To lift the poor, we shouldn’t just look at taxation,” Rajagopalan said. The world’s poorest 2 billion people are “living in conditions that are entirely unjust. … There won’t be many takers for this politically, but the single best way to improve their lives is to allow immigration to rich countries.”
Jill Lepore, Maya Jasanoff, Kirsten Weld launch course that views present as wholly connected to the past
The media keep calling the 2024 election “unprecedented,” said political historian Jill Lepore. They said the same thing about contests for U.S. president in 2016 and 2020.
“Since, I would say, the [2000] Bush v. Gore election, our political discourse is about falling off a cliff,” Lepore, the David Woods Kemper ’41 Professor of American History, told her students recently. “It has a weird torquing effect on how people experience daily life.”
Lepore is one of three well-known scholars teaching the brand-new “HIST 10: A History of the Present” this semester. Co-created with professors Maya Jasanoff and Kirsten Weld, the introductory course uncovers the historical concepts and clichés that anchor perceptions of today.
“We’re going to approach the present in this classroom as historians — as people concerned with how individuals and collectives situate themselves in historical time and, most importantly, how they make meaning of it,” promised Weld, a Canadian-born historian of modern Latin America.
Jasanoff, the X.D. and Nancy Yang Professor of Arts and Sciences and Coolidge Professor of History, kicked off the first lecture with some history on the course itself. HIST 10 was once offered as a yearlong survey for actual and prospective concentrators, focused on the Western world, especially Europe, she explained. But that framing slowly fell out of favor with students and faculty alike until the course was finally canceled in 2006.
“But we felt, and continue to feel, that history is something we all need to be educated in and conscious of,” said Jasanoff, a historian of the British Empire. “And so the three of us started talking about re-presenting a gateway course to revivify the sense that history is integrated into our consciousness, our lives, our society, our government, and much more.”
They designed a lecture course broken into three modules, each informed by individual interests and expertise. First Jasanoff is interrogating evolving definitions of ancestry. Lepore, who is also a Harvard Law School professor and New Yorker staff writer, will then lead a section on rights. And Weld will wrap things up with a unit on memory.
“What really drew me to the class were these conceptual frameworks,” said Victoria Rengel ’28, a Newark, N.J., native considering a joint concentration in government and history. “As the professors laid out in the first class, most human conflict can be broken down and understood through one of the three.”
Enrolled in the course are about 60 undergraduates, plus a sizeable contingent of auditors. “It’s packed,” observed A.J. Moyeda ’27, a history concentrator with a secondary in philosophy from South Texas. “I never imagined a history course with this many students.”
Mondays begin with scholarly takes on the students’ anonymously submitted questions. Wednesdays feature all three professors engaged with (and often debating) daily news. The first few conversations left Jacqueline Metzger ’27, a joint concentrator in history and in Theater, Dance & Media from the Washington, D.C., area, rethinking how election coverage is packaged.
“I really liked the conversation about things being unprecedented,” Metzger said. “It’s kind of an intimidating term, because it makes you feel we’re unequipped to handle what’s going on.”
Most of Day One was focused on the news cycle — a key tool for making sense of the present. Jasanoff invited everyone to pull out their laptops for some live polling on media habits and current events. Newspapers, social media, and word of mouth/group chats emerged as the go-to sources. Topping a ranking of 2024 issues were “the election,” “political polarization,” and “Palestine.”
Picking up on one of these topics, Weld demonstrated how students might think about the war in Gaza using her module. In the Middle East and elsewhere, perspectives on the conflict are “indelibly and inseparably framed,” she said, by historical memories of two traumatic world events: the Holocaust, when 6 million Jews were killed by the Nazis, and the Nakba of 1948, when 750,000 Palestinian Arabs were displaced to create the modern State of Israel.
“How one remembers those events — both within one’s own families as well as politically speaking — makes up a constitutive building block of your interpretations of the present,” Weld said.
The 75-minute session ended with Lepore lending rich historical insight to the 21st century’s confusing swirl of fact, fiction, and information technology. She started with the “long adversarial tradition” between America’s newspapers and its political leaders, beginning in the 1720s with the dogged New England Courant (published by Benjamin Franklin’s brother James).
This year’s candidates for U.S. president are hardly the first to try reaching voters without risking criticism by newspaper reporters. Lepore offered examples that bridge past and present including rallies, political postering, and President Franklin D. Roosevelt’s direct-to-listener radio broadcasts from the 1930s.
“It’s cool to see that much of what we’re dealing with has happened again and again,” Metzger said. “It’s such a grounded way to understand who we are and where we are in history.”
At one point, Lepore shared an advertising clip from President Dwight D. Eisenhower’s 1956 re-election campaign. It opens with an animated figure anguished over the firehose of political information from TV, radio, magazines, and newspapers.
“I’ve listened to everybody,” he cries. “Who’s right? What’s right? What should I believe? What are the facts? How can I tell?”
“Those are questions we still have,” Lepore emphasized. “They’re endemic to the age of mass communication we have been in for well more than a century. I hope that gives you some comfort.”
A study from Weill Cornell Medicine provides new insights into a pair of proteins and their opposing functions in regulating the interferon response in hepatic stellate cells, a critical immune component in the liver’s fight against tumors.
Interim President Michael I. Kotlikoff invoked history – Cornell’s and his own – in his first State of the University address, delivered Oct. 18 in Call Auditorium during the Trustee-Council Annual Meeting.
The much-touted arrival of “precision medicine” promises tailored technologies that help individuals and may also reduce health care costs. New research shows how pregnancy screening can meet both of these objectives, but the findings also highlight how precision medicine must be matched well with patients to save money.
The study involves cfDNA screenings, a type of blood test that can reveal conditions based on chromosomal variation, such as Down syndrome. For many pregnant women, though not all, cfDNA screenings can be an alternative to amniocentesis or chorionic villus sampling (CVS) — invasive procedures that carry a risk of miscarriage.
In examining how widely cfDNA tests should be used, the study reached a striking conclusion.
“What we find is the highest value for the cfDNA testing comes from people who are high risk, but not extraordinarily high risk,” says Amy Finkelstein, an MIT economist and co-author of a newly published paper detailing the study.
The paper, “Targeting Precision Medicine: Evidence from Prenatal Screening,” appears in the Journal of Political Economy. The co-authors are Peter Conner, an associate professor and senior consultant at Karolinska University Hospital in Sweden; Liran Einav, a professor of economics at Stanford University; Finkelstein, the John and Jennie S. MacDonald Professor of Economics at MIT; and Petra Persson, an assistant professor of economics at Stanford University.
“There is a lot of hope attached to precision medicine,” Persson says. “We can do a lot of new things and tailor health care treatments to patients, which holds a lot of promise. In this paper, we highlight that while this is all true, there are also significant costs in the personalization of medicine. As a society, we may want to examine how to use these technologies while keeping an eye on health care costs.”
Measuring the benefit to “middle-risk” patients
To conduct the study, the research team looked at the introduction of cfDNA screening in Sweden, during the period from 2011 to 2019, with data covering over 230,000 pregnancies. As it happens, there were also regional discrepancies in the extent to which cfDNA screenings were covered by Swedish health care, for patients not already committed to having invasive testing. Some regions covered cfDNA testing quite widely, for all patients with a “moderate” assessed risk or higher; other regions, by contrast, restricted coverage to a subset of patients within that group with elevated risk profiles. This provided variation the researchers could use when conducting their analysis.
With the most generous coverage of cfDNA testing, the procedure was used by 86 percent of patients; with more targeted coverage, that figure dropped to about 33 percent. In both cases, the amount of invasive testing, including amniocentesis, dropped significantly, to about 5 percent. (The cfDNA screenings are very informative but, unlike invasive tests, not fully conclusive, so some pregnant women will opt for a follow-up procedure.)
Both approaches, then, yielded similar reductions in the rate of invasive testing. But due to the costs of cfDNA tests, the economic implications are quite different. Introducing wide coverage of cfDNA tests would raise overall medical costs by about $250 per pregnancy, the study estimates. In contrast, introducing cfDNA with more targeted coverage yields a reduction of about $89 per patient.
Ultimately, the larger dynamics are clear. Pregnant women who have the highest risk of bearing children with chromosome-based conditions are likely to still opt for an invasive test like amniocentesis. Those with virtually no risk may not even have cfDNA tests done. For a group in between, cfDNA tests have a substantial medical value, relieving them of the need for an invasive test. And narrowing the group of patients getting cfDNA tests lowers the overall cost.
“People who are very high-risk are often going to use the invasive test, which is definitive, regardless of whether they have a cfDNA screen or not,” Finkelstein says. “But for middle-risk people, covering cfDNA produces a big increase in cfDNA testing, and that produces a big decline in the rates of the riskier, and more expensive, invasive test.”
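The opposing cost effects follow from simple arithmetic over uptake rates and unit prices. The Python sketch below is a back-of-the-envelope illustration, not the paper’s model: the unit costs and the baseline invasive-testing rate are invented values, tuned only so the outputs land near the article’s headline figures, while the 86 percent, 33 percent, and roughly 5 percent rates come from the study.

```python
# Illustrative cost comparison. CFDNA_COST, INVASIVE_COST, and
# BASELINE_INVASIVE are hypothetical values, chosen so the results land
# near the article's +$250 and -$89 per-pregnancy figures.
CFDNA_COST = 640          # assumed price of one cfDNA screen
INVASIVE_COST = 2000      # assumed price of amniocentesis/CVS
BASELINE_INVASIVE = 0.20  # assumed invasive-testing rate with no cfDNA coverage

def cost_per_pregnancy(cfdna_uptake, invasive_rate):
    """Average testing cost per pregnancy under a coverage policy."""
    return cfdna_uptake * CFDNA_COST + invasive_rate * INVASIVE_COST

baseline = cost_per_pregnancy(0.00, BASELINE_INVASIVE)
wide = cost_per_pregnancy(0.86, 0.05)      # generous coverage: 86% uptake
targeted = cost_per_pregnancy(0.33, 0.05)  # targeted coverage: 33% uptake

print(f"wide coverage:     {wide - baseline:+.0f} per pregnancy")   # about +250
print(f"targeted coverage: {targeted - baseline:+.0f} per pregnancy")  # about -89
```

The point of the exercise is the sign flip: both policies cut invasive testing to the same 5 percent, but paying for cfDNA screens for everyone at moderate or higher risk more than offsets those savings, while restricting coverage to the narrower high-risk band does not.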
How precise?
In turn, the study’s findings raise a larger point. Precision medicine, in almost any form, adds expense to medical care. Therefore, developing some precision about who receives it matters.
“The allure of precision medicine is targeting people who need it, so we don’t do expensive and potentially unpleasant tests and treatments of people who don’t need them,” Finkelstein says. “Which sounds great, but it kicks the can down the road. You still need to figure out who is a candidate for which kind of precision medicine.”
Therefore, in medicine, instead of just throwing technology at the problem, we may want to aim carefully, where evidence warrants it. Overall, that means good precision medicine builds on good policy analysis, not just good technology.
“Sometimes when we think medical technology has an impact, we simply ask if the technology raises or lowers health care costs, or if it makes patients healthier,” Persson observes. “An important insight from our work, I think, is that the answers are not just about the technology. It’s about the pairing of technology and policy because policy is going to influence the impact of technology on health care and patient outcomes. We see this clearly in our study.”
In this case, finding comparable patient outcomes with narrower cfDNA screenings suggests one way of targeting diagnostic procedures. And across many possible medical situations, finding the subset of people for whom a technology is most likely to yield new and actionable information seems a promising objective.
“The benefit is not just an innate feature of the testing,” Finkelstein says. “With diagnostic technologies, the value of information is greatest when you’re neither obviously appropriate nor inappropriate for the next treatment. It’s really the non-monotone value of information that’s interesting.”
The study was supported, in part, by the U.S. National Science Foundation.
Taking place for the first time at Cybathlon 2024 is the Assistance Robot Race, with ETH represented by Team RSL. When paraplegic pilot Sammy Kunz navigates the course, a four-legged robot will be at his side.
Andrew Arenge of the Penn Program on Opinion Research and Election Studies has created dashboards showing geotargeted issues and spending amounts looking at the Harris and Trump campaigns.
University of Melbourne researchers have released a new cookbook to help demystify native ingredients and empower Australia’s home cooks to incorporate Indigenous food into their everyday meals.
Some of the most widely used drugs today, including penicillin, were discovered through a process called phenotypic screening. Using this method, scientists are essentially throwing drugs at a problem — for example, when attempting to stop bacterial growth or fix a cellular defect — and then observing what happens next, without necessarily knowing first how the drug works. Perhaps surprisingly, historical data show that this approach is better at yielding approved medicines than investigations that focus more narrowly on specific molecular targets.
But many scientists believe that properly setting up the problem is the true key to success. Certain microbial infections or genetic disorders caused by single mutations are much simpler to prototype than complex diseases like cancer, which require intricate biological models that are far harder to make or acquire. The result is a bottleneck in the number of drugs that can be tested, and thus a limit on the usefulness of phenotypic screening.
Now, a team of scientists led by the Shalek Lab at MIT has developed a promising new way to apply phenotypic screening at scale. Their method allows researchers to apply multiple drugs to a biological problem at once, and then computationally work backward to figure out the individual effects of each. When the team applied this method to models of pancreatic cancer and human immune cells, they uncovered surprising new biological insights, while also cutting cost and sample requirements several-fold — solving a few problems in scientific research at once.
Zev Gartner, a professor in pharmaceutical chemistry at the University of California at San Francisco, says this new method has great potential. “I think if there is a strong phenotype one is interested in, this will be a very powerful approach,” Gartner says.
The research was published Oct. 8 in Nature Biotechnology. It was led by Ivy Liu, Walaa Kattan, Benjamin Mead, Conner Kummerlowe, and Alex K. Shalek, the director of the Institute for Medical Engineering and Sciences (IMES) and the Health Innovation Hub at MIT, as well as the J. W. Kieckhefer Professor in IMES and the Department of Chemistry. It was supported by the National Institutes of Health and the Bill and Melinda Gates Foundation.
A “crazy” way to increase scale
Technological advances over the past decade have revolutionized our understanding of the inner lives of individual cells, setting the stage for richer phenotypic screens. However, many challenges remain.
For one, biologically representative models like organoids and primary tissues are only available in limited quantities. The most informative tests, like single-cell RNA sequencing, are also expensive, time-consuming, and labor-intensive.
That’s why the team decided to test out the “bold, maybe even crazy idea” to mix everything together, says Liu, a PhD student in the MIT Computational and Systems Biology program. In other words, they chose to combine many perturbations — things like drugs, chemical molecules, or biological compounds made by cells — into one single concoction, and then try to decipher their individual effects afterward.
They began testing their workflow by making different combinations of 316 U.S. Food and Drug Administration-approved drugs. “It’s a high bar: basically, the worst-case scenario,” says Liu. “Since every drug is known to have a strong effect, the signals could have been impossible to disentangle.”
These random combinations ranged from three to 80 drugs per pool, each of which was applied to lab-grown cells. The team then tried to recover the effect of each individual drug using a linear computational model.
It was a success. When compared with traditional tests for each individual drug, the new method yielded comparable results, successfully finding the strongest drugs and their respective effects in each pool, at a fraction of the cost, samples, and effort.
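The paper describes this step only as a linear computational model; one standard way to realize the idea, familiar from compressed sensing, is sparse linear regression. The Python sketch below is an illustration under that assumption, using simulated data rather than anything from the study: each pool is a row of a binary membership matrix, each pool’s readout is modeled as the sum of its drugs’ effects plus noise, and a Lasso recovers the few strong-acting drugs from far fewer pools than drugs.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(seed=0)

n_drugs, n_pools = 316, 120   # fewer pooled measurements than drugs
true_effects = np.zeros(n_drugs)
hit_idx = rng.choice(n_drugs, size=10, replace=False)
true_effects[hit_idx] = rng.normal(loc=0.0, scale=2.0, size=10)

# Binary design matrix: membership[i, j] = 1 if drug j is in pool i.
# Pool sizes vary randomly, echoing the 3-to-80-drug pools in the study.
membership = (rng.random((n_pools, n_drugs)) < 0.1).astype(float)

# Observed phenotype per pool: additive per-drug effects plus noise.
readout = membership @ true_effects + rng.normal(0.0, 0.1, size=n_pools)

# L1-penalized regression: assuming most drugs have negligible effect,
# the sparse solution identifies the strong hits.
model = Lasso(alpha=0.05).fit(membership, readout)
recovered = np.flatnonzero(np.abs(model.coef_) > 0.1)
print("true hits:     ", sorted(hit_idx))
print("recovered hits:", sorted(recovered))
```

The real analysis is certainly richer (single-cell readouts, quality filters, follow-up validation of top hits), but the compressed-sensing intuition is the same: if only a few perturbations matter, random pooling plus a sparse linear model can recover them with far fewer samples than one-drug-per-well testing.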
Putting it into practice
To test the method’s applicability to real-world health challenges, the team then tackled two problems that were previously out of reach for phenotypic screening techniques.
The first test focused on pancreatic ductal adenocarcinoma (PDAC), one of the deadliest types of cancer. In PDAC, many types of signals come from the surrounding cells in the tumor's environment. These signals can influence how the tumor progresses and responds to treatments. So, the team wanted to identify the most important ones.
Using their new method to pool different signals in parallel, they found several surprise candidates. “We never could have predicted some of our hits,” says Shalek. These included two previously overlooked cytokines that actually could predict survival outcomes of patients with PDAC in public cancer data sets.
The second test looked at the effects of 90 drugs on adjusting the immune system’s function. These drugs were applied to fresh human blood cells, which contain a complex mix of different types of immune cells. Using their new method and single-cell RNA-sequencing, the team could not only test a large library of drugs, but also separate the drugs’ effects out for each type of cell. This enabled the team to understand how each drug might work in a more complex tissue, and then select the best one for the job.
“We might say there’s a defect in a T cell, so we’re going to add this drug, but we never think about, well, what does that drug do to all of the other cells in the tissue?” says Shalek. “We now have a way to gather this information, so that we can begin to pick drugs to maximize on-target effects and minimize side effects.”
Together, these experiments also showed Shalek the need to build better tools and datasets for creating hypotheses about potential treatments. “The complexity and lack of predictability for the responses we saw tells me that we likely are not finding the right, or most effective, drugs in many instances,” says Shalek.
Reducing barriers and improving lives
Although the current compression technique can identify the perturbations with the greatest effects, it’s still unable to perfectly resolve the effects of each one. Therefore, the team recommends that it act as a supplement to support additional screening. “Traditional tests that examine the top hits should follow,” Liu says.
Importantly, however, the new compression framework drastically reduces the number of input samples, costs, and labor required to execute a screen. With fewer barriers in play, it marks an exciting advance for understanding complex responses in different cells and building new models for precision medicine.
Shalek says, “This is really an incredible approach that opens up the kinds of things that we can do to find the right targets, or the right drugs, to use to improve lives for patients.”
Some of the most widely used drugs today, including penicillin, were discovered through a process called phenotypic screening. Using this method, scientists essentially throw drugs at a problem — attempting to stop bacterial growth or fix a cellular defect, for example — and then observe what happens, without necessarily knowing in advance how the drug works. Perhaps surprisingly, historical data show that this approach has been better at yielding approved medicines than investigations that focus narrowly on specific molecular targets.
But many scientists believe that properly setting up the problem is the true key to success. Certain microbial infections and genetic disorders caused by single mutations are much simpler to prototype than complex diseases like cancer, which require intricate biological models that are far harder to make or acquire. The result is a bottleneck in the number of drugs that can be tested, and thus a limit on the usefulness of phenotypic screening.
Now, a team of scientists led by the Shalek Lab at MIT has developed a promising new way to apply phenotypic screening at scale. Their method allows researchers to expose a biological problem to multiple drugs at once, and then computationally work backward to figure out the individual effects of each. When the team applied this method to models of pancreatic cancer and to human immune cells, they uncovered surprising new biological insights while also cutting cost and sample requirements several-fold — solving several problems in scientific research at once.
Zev Gartner, a professor in pharmaceutical chemistry at the University of California at San Francisco, says this new method has great potential. “I think if there is a strong phenotype one is interested in, this will be a very powerful approach,” Gartner says.
The research was published Oct. 8 in Nature Biotechnology. It was led by Ivy Liu, Walaa Kattan, Benjamin Mead, Conner Kummerlowe, and Alex K. Shalek, the director of the Institute for Medical Engineering and Sciences (IMES) and the Health Innovation Hub at MIT, as well as the J. W. Kieckhefer Professor in IMES and the Department of Chemistry. It was supported by the National Institutes of Health and the Bill and Melinda Gates Foundation.
A “crazy” way to increase scale
Technological advances over the past decade have revolutionized our understanding of the inner lives of individual cells, setting the stage for richer phenotypic screens. However, many challenges remain.
For one, biologically representative models like organoids and primary tissues are only available in limited quantities. The most informative tests, like single-cell RNA sequencing, are also expensive, time-consuming, and labor-intensive.
That’s why the team decided to test out the “bold, maybe even crazy idea” to mix everything together, says Liu, a PhD student in the MIT Computational and Systems Biology program. In other words, they chose to combine many perturbations — things like drugs, chemical molecules, or biological compounds made by cells — into one single concoction, and then try to decipher their individual effects afterward.
They began testing their workflow by making different combinations of 316 U.S. Food and Drug Administration-approved drugs. “It’s a high bar: basically, the worst-case scenario,” says Liu. “Since every drug is known to have a strong effect, the signals could have been impossible to disentangle.”
These random combinations ranged from three to 80 drugs per pool, and each pool was applied to lab-grown cells. The team then set out to recover the effects of each individual drug using a linear computational model.
It was a success. When compared with traditional tests for each individual drug, the new method yielded comparable results, successfully finding the strongest drugs and their respective effects in each pool, at a fraction of the cost, samples, and effort.
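To make the deconvolution step concrete, here is a minimal sketch of how a pooled screen can be "worked backward" with a linear model. The pool sizes mirror those described above, but the simulated phenotype, the noise level, and the plain least-squares solver are illustrative assumptions, not the team's actual pipeline.

```python
import numpy as np

# Minimal sketch: random pools of drugs, one phenotype readout per pool,
# and a linear model that recovers each drug's individual effect.
rng = np.random.default_rng(0)
n_drugs, n_pools = 316, 1200
# Assume ~10% of drugs have a real effect; the rest are inert here.
true_effect = rng.normal(0, 1, n_drugs) * (rng.random(n_drugs) < 0.1)

# Design matrix: pools[i, j] = 1 if drug j is in pool i (3 to 80 drugs each).
pools = np.zeros((n_pools, n_drugs))
for i in range(n_pools):
    members = rng.choice(n_drugs, size=rng.integers(3, 81), replace=False)
    pools[i, members] = 1.0

# Assumed readout: additive individual effects plus measurement noise.
readout = pools @ true_effect + rng.normal(0, 0.1, n_pools)

# Work backward with least squares to estimate per-drug effects.
estimate, *_ = np.linalg.lstsq(pools, readout, rcond=None)
top_hits = np.argsort(-np.abs(estimate))[:10]
print("strongest inferred drugs:", top_hits)
```

In practice a compressed screen would use far fewer measurements than drugs, which is where sparsity-exploiting solvers come in; plain least squares is used here only to keep the sketch short.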
Putting it into practice
To test the method’s applicability to real-world health challenges, the team then took on two problems that were previously out of reach for phenotypic screening.
The first test focused on pancreatic ductal adenocarcinoma (PDAC), one of the deadliest types of cancer. In PDAC, many types of signals come from the surrounding cells in the tumor's environment. These signals can influence how the tumor progresses and responds to treatments. So, the team wanted to identify the most important ones.
Using their new method to pool different signals in parallel, they found several surprising candidates. “We never could have predicted some of our hits,” says Shalek. These included two previously overlooked cytokines that, it turned out, could predict survival outcomes of patients with PDAC in public cancer data sets.
The second test looked at the effects of 90 drugs on modulating the immune system. These drugs were applied to fresh human blood cells, which contain a complex mix of immune cell types. Using their new method together with single-cell RNA sequencing, the team could not only test a large library of drugs, but also separate each drug’s effects by cell type. This let the team understand how each drug might act in a more complex tissue, and then select the best one for the job.
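As a schematic illustration of that last step, single-cell readouts carry a cell-type label for every cell, so the same drug's effect can be summarized separately per cell type. The data frame, column names, and the drug label below are placeholders for illustration, not the study's data.

```python
import pandas as pd

# Toy single-cell summary: each row is one cell with a type label and a score.
cells = pd.DataFrame({
    "drug":      ["drug_A"] * 4 + ["control"] * 4,
    "cell_type": ["T cell", "T cell", "monocyte", "monocyte"] * 2,
    "activation_score": [0.2, 0.3, 0.9, 1.1, 0.5, 0.6, 0.4, 0.5],
})

# Mean score per (cell type, condition), then the per-type drug effect.
by_type = (
    cells.groupby(["cell_type", "drug"])["activation_score"].mean().unstack()
)
print(by_type["drug_A"] - by_type["control"])
# In this toy example, drug_A damps T-cell activation but boosts monocytes --
# the kind of cell-type-specific trade-off the method is designed to expose.
```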
“We might say there’s a defect in a T cell, so we’re going to add this drug, but we never think about, well, what does that drug do to all of the other cells in the tissue?” says Shalek. “We now have a way to gather this information, so that we can begin to pick drugs to maximize on-target effects and minimize side effects.”
Together, these experiments also showed Shalek the need to build better tools and datasets for creating hypotheses about potential treatments. “The complexity and lack of predictability for the responses we saw tells me that we likely are not finding the right, or most effective, drugs in many instances,” says Shalek.
Reducing barriers and improving lives
Although the current compression technique can identify the perturbations with the greatest effects, it cannot perfectly resolve the effects of each one. The team therefore recommends using it as a first pass, complemented by follow-up screening. “Traditional tests that examine the top hits should follow,” Liu says.
Importantly, however, the new compression framework drastically reduces the number of input samples, costs, and labor required to execute a screen. With fewer barriers in play, it marks an exciting advance for understanding complex responses in different cells and building new models for precision medicine.
Shalek says, “This is really an incredible approach that opens up the kinds of things that we can do to find the right targets, or the right drugs, to use to improve lives for patients.”
No matter the outcome, the results of the 2024 United States presidential election are certain to have global impact. How are citizens and leaders in other parts of the world viewing this election? What’s at stake for their countries and regions?
This was the focus of “The 2024 US Presidential Election: The World is Watching,” a Starr Forum held earlier this month on the MIT campus.
The Starr Forum is a public event series hosted by MIT’s Center for International Studies (CIS) and focused on leading issues of global interest. The event was moderated by Evan Lieberman, director of CIS and the Total Professor of Political Science and Contemporary Africa.
Experts in African, Asian, European, and Latin American politics assembled to share ideas with one another and the audience.
Each offered informed commentary on their respective regions, situating their observations within several contexts, including the countries’ styles of government, residents’ perceptions of American democratic norms, and America’s stature in the eyes of those countries’ populations.
Perceptions of U.S. politics from across the globe
Katrina Burgess, professor of political economy at Tufts University and the director of the Henry J. Leir Institute of Migration and Human Security, sought to distinguish the multiple political identities of members of the Latin American diaspora in America and their perceptions of America’s relationship with their countries.
“American democracy is no longer perceived as a standard bearer,” Burgess said. “While members of these communities see advantages in aligning themselves with one of the presidential candidates because of positions on economic relations, immigration, and border security, others have deeply held views on fossil fuels and increased access to sustainable energy solutions.”
Prerna Singh, Brown University’s Mahatma Gandhi Professor of Political Science and International Studies, spoke about India’s status as the world’s largest democracy and described a country moving away from democratic norms.
“Indian leaders don’t confer with the press,” she said. “Indian leaders don’t debate like Americans.”
India, ethnically and linguistically diverse, has elected several women to its highest government posts, Singh noted, while the United States has yet to elect one. She described a brand of “exclusionary nationalism” that threatened to move India away from democracy and toward something like authoritarian rule.
John Githongo, the Robert E. Wilhelm Fellow at CIS for 2024-25, shared his findings on African countries’ views of the 2024 election.
“America’s soft power infrastructure in Africa is crumbling,” said Githongo, a Kenyan native. “Chinese investment in Africa is up significantly and China is seen by many as an ideal political and economic partner.”
Youth-led protests in Kenya, Githongo noted, occurred in response to a failure of promised democratic reforms. He cautioned against a potential return to a pre-Cold War posture in Africa, noting that the Biden administration was the first in some time to attempt to reestablish economic and political ties with African countries.
Daniel Ziblatt, the Eaton Professor of Government at Harvard University and the director of the Minda de Gunzburg Center for European Studies, described shifting political winds in Europe that appear similar to increased right-wing extremism and a brand of populist agitation being observed in America.
“We see the rise of the radical, antidemocratic right in Europe and it looks like shifts we’ve observed in the U.S.,” he noted. “Trump supporters in Germany, Poland, and Hungary are increasingly vocal.”
Ziblatt acknowledged the divisions in the historical transatlantic relationship between Europe and America as symptoms of broader challenges. Russia’s invasion of Ukraine, energy supply issues, and national security apparatuses dependent on American support may continue to cause political ripples, he added.
Does America still have global influence?
Following each of their presentations, the guest speakers engaged in a conversation, taking questions from the audience. There was agreement among panelists that there’s less investment globally in the outcome of the U.S. election than may have been observed in past elections.
Singh noted that, from the perspective of the Indian media, India has bigger fish to fry.
Panelists diverged, however, when asked about the rise of political polarization and its connection with behaviors observed in American circles.
“This trend is global,” Burgess asserted. “There’s no causal relationship between American phenomena and other countries’ perceptions.”
“I think they’re learning from each other,” Ziblatt countered when asked about extremist elements in America and Europe. “There’s power in saying outrageous things.”
Githongo asserted a kind of “trickle-down” was at work in some African countries.
“Countries with right-leaning governments see those inclinations make their way to organizations like evangelical Christians,” he said. “Their influence mirrors the rise of right-wing ideology in other African countries and in America.”
Singh likened the continued splintering of American audiences to India’s caste system.
“I think where caste comes in is with the Indian diaspora,” she said. “Indian-American business and tech leaders tend to hail from high castes.” These leaders, she said, have outsized influence in their American communities and in India.
Left to right: Katrina Burgess of Tufts University; Daniel Ziblatt of Harvard University; Evan Lieberman of MIT; John Githongo, the CIS Robert E. Wilhelm Fellow at MIT; and Prerna Singh of Brown University participate in a recent Starr Forum.
University cites careful planning, stewardship for solid financial position, endowment performance
Memorial Hall. Photo by Grace DuVal
Staff Report
Finance leaders note investments in key academic, community priorities
The University reported a budget surplus, along with robust endowment performance, and pointed to investments made throughout fiscal year 2024 in key mission-focused areas in its annual financial report released Thursday. Additionally, the report detailed philanthropic giving for the period, which continues to provide the resources to support increased financial aid and a range of academic and research priorities.
The Gazette spoke with Executive Vice President Meredith Weenick, chief financial officer and Vice President for Finance Ritu Kalra, and treasurer Timothy Barakett to learn more about how disciplined planning and sound financial management have positioned Harvard for progress in the years ahead. This interview was edited for clarity and length.
A year ago, the University had marked a full fiscal year return to post-pandemic normal operations, and we saw a corresponding operating margin that aligned with pre-pandemic performance. How would you describe the University’s financial position for fiscal year 2024, which ended with a surplus of $45.3 million?
WEENICK: Harvard continues to be in a solid financial position, grounded in thoughtful planning and careful stewardship across the University. This year’s surplus reflects the strategic decisions made by leadership across each of Harvard’s Schools. These surpluses are not merely financial metrics; they are vital sources of funds that allow us to strategically invest in educational and research initiatives aimed at tackling some of the most pressing global challenges.
KALRA: Meredith makes an important point about the nature of Harvard’s operating result. It’s an aggregate reflection of the collective results across our Schools and units. These surpluses, plural — and in some cases deficits — are earned and managed locally. That local autonomy allows deans to direct resources to the areas they identify as their highest priorities.
This year, for the second year in a row, our operating expenses grew faster than our operating revenues — 9 percent versus 6 percent. That is not a sustainable path over the long run. But the headline figures call for an understanding of the nuance behind the numbers. Some of what looks like expense growth is really investment strategically intended to foster future growth. This year, those investments spanned several domains, including developing our technology infrastructure and AI capabilities and renewing our campus facilities to enable types of research that were unimaginable just a decade ago.
Of course, the pace of our recent spending underscores the need for prudence going forward. While it has been purposeful in the short term, it won’t be sustainable without a commensurate growth in revenue over the long term.
BARAKETT: This long-term perspective is essential. The University has investments it must make in the near future, including, for example, increased commitments to financial aid, which are vital to making Harvard and educational opportunities accessible. We must also continue to transform how we generate and distribute energy across the campus to meet our sustainability goals and commitments. At the same time, there are new opportunities we need to be poised to drive forward. For example, the transformative potentials of AI, quantum computing, and the life sciences will be made possible by the work of Harvard researchers across disciplines. In our planning for the years ahead, we must create the financial capacity to make room for these investments.
The academic year 2023-2024 was challenging for Harvard’s community, accompanied by frequent public criticism and scrutiny. Were there any financial impacts on the University?
KALRA: Throughout the year, our most immediate focus was to ensure our students had the resources needed to support their physical and emotional well-being. Senior leaders across the University and its Schools also invested enormous time and energy in cultivating a campus environment that fosters open inquiry and responsible civil discourse as a North Star for intellectual and personal growth. Each of those investments had a financial impact, though finances weren’t the drivers of those efforts.
The impact on philanthropy is less obvious. Across the higher education landscape, neither tuition revenues nor funding for research covers the full cost of an education. At Harvard, philanthropy, in the form of gifts for current use and the investment returns spawned by endowed gifts, is essential to make up the difference.
On both fronts, we are enormously grateful. In fiscal year 2024, current-use giving reached the second-highest level in Harvard’s history, and Harvard Management Company (HMC) generated a 9.6 percent return on the endowment portfolio. The future will be more complicated — both the level of giving and the level of returns may be difficult to sustain — but we remain grateful to our donors for their steadfast belief in Harvard’s academic mission. Their support is vital to everything we do.
WEENICK: I will also add that while we faced a challenging year on and off campus, Harvard never wavered from its commitment to excellence. The arenas in which we achieved that excellence span an astoundingly broad range. Dr. Claudia Goldin received the Nobel Prize in Economics last year, and Dr. Gary Ruvkun just won the Nobel Prize in Medicine. Ten of our students were named Rhodes Scholars last year, a record for Harvard and more than double any other school. And let’s not forget that our community excels at the highest levels outside of academics as well. Our student-athletes and alumni took home a record 13 medals at the Paris Olympics.
How will the most recent endowment return of 9.6 percent impact distributions in a way that benefits both current and future generations of students and scholars?
KALRA: The fiscal year 2024 endowment return will provide a welcome boost to distribution growth in the short term. However, as we caution every year, it’s critical to remember that the endowment is not a $53 billion checking account.
The endowment, in reality, is 14,600 different endowments, many of which belong to a specific School or are designated for particular areas of scholarship or programs. The distribution that supports those programs is meant to grow each year to keep pace with inflation, while the endowment itself is meant to last forever. That requires us to spend responsibly from the endowment, as we have to be able to support future generations of students and scholars even if we face periods of lower growth.
Harvard targets an 8 percent return. That accounts for an approximately 5 percent distribution to the University’s annual operations and allows the value of that distribution to grow each year by 3 percent to account for inflation. Under Narv Narvekar’s leadership, HMC’s return has been 9.3 percent over the past seven years, well in excess of the target.
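To make that arithmetic concrete, here is a back-of-the-envelope sketch using the figures from the interview; the flat 8 percent return, the 5 percent payout, and the rounded $53 billion starting value are simplifying assumptions, not a model of HMC's actual results.

```python
# Target return (8%) = distribution rate (~5%) + inflation growth (~3%).
endowment, payout_rate, ret = 53e9, 0.05, 0.08

for year in range(1, 4):
    distribution = payout_rate * endowment          # ~5% paid to operations
    endowment = endowment * (1 + ret) - distribution
    print(f"Year {year}: distribution ${distribution/1e9:.2f}B, "
          f"endowment ${endowment/1e9:.2f}B")

# Both the endowment and the distribution compound at roughly
# ret - payout_rate = 3 percent a year, matching the inflation target.
```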
WEENICK: As Narv shared in his letter in the financial report, there are a variety of factors that played into this year’s return, as is the case every year. Since HMC was founded, the endowment’s 11 percent annualized return has allowed distributions to grow dramatically. These funds support critical initiatives, from financial aid and faculty support to professorships and research.
BARAKETT: Harvard derives nearly 40 percent of its annual operating revenue from the endowment, so finding the right balance between return, risk, and volatility is critical. HMC’s performance was suboptimal before Narv’s appointment, and he inherited a portfolio that was overweighted in natural resources and real estate and underweighted in private equity and hedge funds.
Over the past seven years since his arrival, HMC has been restructured, and the portfolio has been substantially repositioned. Given the scale of the endowment, this took some time, and we are now well-positioned. While HMC’s performance is best measured over the long term, the endowment’s performance in fiscal year 2024 is certainly encouraging. It shows we are on the right track.
A challenge of recent years has been rapidly rising interest rates. Yet bonds and notes payable increased from $6.2 billion in fiscal year 2023 to $7.1 billion in fiscal year 2024. Why did the University decide to issue debt at this time?
KALRA: It’s true that interest rates are elevated relative to the decade or so following the global financial crisis. However, that is not an interest rate environment to which we are likely to return, barring an unforeseen crisis. Yet we still need to invest in our buildings and maintain our campus.
There was a window last spring when credit spreads reached historically low levels, offsetting some of the impact of the rise in rates. The rating agencies reaffirmed Harvard’s AAA credit ratings, which reflect confidence in Harvard’s stability, and we took advantage of that market opportunity to borrow at an attractive all-in cost, right around 4 percent.
A portion of our bond issuance will go toward planned future capital projects, and a portion went toward refinancing outstanding debt that carried higher interest rates. Harvard’s overall financial condition remains very strong. We have ample levels of liquidity and ready access to the capital markets for future borrowings as needed.
WEENICK: As you can see from the construction activity while walking around our campuses, whether in Cambridge, Allston, or Longwood, we have a number of long-term capital projects underway. We also have plans for facility renovations and new construction, which are essential for the University’s infrastructure and growth. For example, we are making progress in Allston with the construction of the new home for the American Repertory Theater at the David E. and Stacey L. Goel Center for Creativity & Performance, along with the first University-wide conference center in the David Rubenstein Treehouse as part of the Enterprise Research Campus. This work also includes addressing other campus maintenance priorities and refreshed lab and classroom space to ensure the resilience and accessibility of our buildings.
One of the key themes found throughout this year’s financial report is advancing the public good. How is Harvard using its resources to support teaching, learning, and research priorities aimed at making a positive impact in the world?
WEENICK: Harvard’s commitment to academic excellence is the way we advance the public good. It’s at the core of everything we do. Our students, faculty, staff, and alumni leverage their knowledge and expertise to effect positive change through research, teaching, and community leadership at a global scale. The resources we steward support these efforts.
As a research university, Harvard is a powerful engine of innovation. In fiscal year 2024, our faculty were awarded $1 billion in external grants from government and private partners. On top of that, the University invests an additional $400 million to $500 million a year to support research and early-stage ideas. The discoveries made here have the potential to improve lives, transform industries, and create tremendous social and economic value. Harvard’s Office of Technology Development plays a pivotal role in facilitating the translation of these discoveries into useful products and services that benefit society.
The University also serves as an epicenter of teaching, learning, and community service through initiatives like the Harvard Ed Portal, which connects the Boston and Cambridge communities to Harvard’s educational resources. Our partnership with our Harvard Medical School affiliates also provides access to some of the world’s best health and well-being resources.
Additionally, the learning that takes place on our campus extends beyond the boundaries of the University. For example, in the Bloomberg Harvard City Leadership Initiative, our students include mayors from around the country, who return to their communities equipped to tackle challenges and improve their residents’ quality of life.
The University brings together community members worldwide at the start of each academic year for Harvard’s Global Day of Service. These civic engagement opportunities motivate students during their time at Harvard and inspire lifelong commitments to public service.
What is the projected financial outlook for next year and beyond?
WEENICK: While our financial position remains strong, we, along with all of our colleagues in higher education, must be conscious of the challenges in our current climate. As we have cautioned before, traditional revenues in higher education are constrained, and we must be cognizant of the pressures on tuition affordability.
As we move forward, it’s clear we need to prioritize activities that most significantly contribute to our mission, and we need to work efficiently so that more resources can go directly toward teaching and research.
KALRA: Projections are dangerous in a world of persistent uncertainty. Safeguarding the University’s financial resilience is vital in such a rapidly evolving landscape. Our reserves have been built over years through disciplined planning and sound financial management. We need to continue to build the capacity to invest in new programs and pedagogies in order to foster the academic excellence that is both Harvard’s hallmark and its aim.
BARAKETT: We are grateful to our community — faculty and other academic personnel, students, staff, alumni, and donors — for their dedication to the University’s mission. Together, we have ensured that Harvard remains positioned for progress and continues to deliver on its world-changing mission.
Some give up without guilt, while others insist on going cover to cover. Harvard readers share their criteria.
Liz Mineo
Harvard Staff Writer
On the matter of whether it’s acceptable to stop reading a book before its end, there are two schools of thought: one that says we must finish what we started, and one that declares that life is too short for books we don’t enjoy.
The Gazette asked librarians, a classics professor, a literature scholar, and a lecturer in English for their views on a subject that triggers fiery debates among book lovers. Although all seven readers interviewed for this story fall on the “life is too short” side of the debate, they differ on when it’s OK to give up without guilt.
Maria Tatar, John L. Loeb Research Professor of Germanic Languages and Literatures and of Folklore and Mythology, Emerita, said reading a book is a magical confluence of several factors that create a fulfilling experience, and when the delight is not there it can be shattering, if liberating.
“There’s a certain romance to reading, hence the inevitable heartache when you break up with a book,” wrote Tatar in an email. “I need both substance and sorcery, captivating content and magic on the page.”
When that magic is absent, said Tatar, readers should act accordingly, whether they’re 50 or 100 pages in. The reader’s clock is what matters, she said.
“Now, when I’m not under the spell of a book by page 50 or so, I put it aside,” Tatar said. “And sometimes, halfway through a volume, I realize that I get it and can stop reading. That happens frequently with biographies, for example.”
Reed Lowrie, head of research services at the Faculty of Arts and Sciences Libraries and a fan of crime fiction, has no problem abandoning books when authors fall back on clichés or uninspired tropes.
“The danger of sticking with a book in that genre to the end is that you can be at the mercy of a horrible plot twist that makes reading the preceding hundreds of pages seem like a waste of time (‘Her missing husband was living in a cave near her house the whole time and she had several interactions with him without realizing he was her husband’),” said Lowrie. “You should keep reading a book if you’re enjoying and/or learning from it, but if neither of those things are true, put it down and find something else to read.”
Alessandra Seiter, community engagement librarian at Harvard Kennedy School, urges readers to follow their gut. “If you feel like you’re not being fulfilled or not being engaged, or it’s not how you want to spend your time, I give you full professional permission to put the book down.”
Whether the reader is put off by the author’s writing style, a weak plot, or the pace, it is OK to drop the book, said Maya Bergamasco, faculty research and scholarly support librarian at Harvard Law School Library.
“If the book is not working for me, I stop reading it,” said Bergamasco. “I’m kind of ruthless. There are so many books in the world and so little time to read them all. If it feels like a chore, why would you put yourself through that?”
Worry less about reading from cover to cover and focus instead on the experience, said Sophia J. Mao, lecturer on English in the Department of English.
“Reading, especially today, is never a solitary activity but comes alive in the classroom, on BookTok, at events in public libraries, bookstores, and community spaces,” said Mao. “As a literary scholar and a teacher, I may guide others toward what makes a specific book notable, but I also want to know what other works people are drawn to and why. I’ll never be tired of hearing from others what they find beautiful and moving. It’s what makes reading a pleasure and a challenge to my own perspective on whether a book is ‘worth’ it.”
When books are picked up on a whim, reading a few pages should suffice, said Mary Frances Angelini, research librarian for the Extension School. “When reading for pleasure, I tend to give the book about 10 percent of the pages to hook me. If it doesn’t work for me, then I move on to the next book.”
Richard Thomas, George Martin Lane Professor of the Classics, tries to be efficient with his reading and reads reviews to choose nonfiction books, a genre he favors.
“It’s important to approach a book with some sort of knowledge about it,” said Thomas. “I tend to read a lot of reviews to make sure that the books are going to be worth my while. With recent books that have just come out, there’s obviously a lot of variation in quality, so you’re more likely to not finish your book, and that can be frustrating and alienating.”
Book lovers should not harbor guilt or agony over parting ways with a book, although those reactions are understandable, said Thomas.
“Guilt and self-criticism are a natural response,” said Thomas. “I’ve never found guilt a very useful quality, so I don’t know if one should feel guilty for not finishing reading a book.”
Tatar shares that sentiment. Instead of remorse, she said, readers should focus on finding books they delight in and allow themselves to feel sad when a beloved book ends.
“Guilt?” said Tatar. “None at all, unless you are reading the book with your book club. Then you feel like a delinquent. Or, of course, if you’re reading it for a class. What’s harder for me, and what sometimes fills me with grief is finishing a book, exiting a world in which I was once immersed, living and breathing with the characters.”
A quasar is the extremely bright core of a galaxy that hosts an active supermassive black hole at its center. As the black hole draws in surrounding gas and dust, it blasts out an enormous amount of energy, making quasars some of the brightest objects in the universe. Quasars have been observed as early as a few hundred million years after the Big Bang, and it’s been a mystery as to how these objects could have grown so bright and massive in such a short amount of cosmic time.
Scientists have proposed that the earliest quasars sprang from overly dense regions of primordial matter, which would also have produced many smaller galaxies in the quasars’ environment. But in a new MIT-led study, astronomers observed some ancient quasars that appear to be surprisingly alone in the early universe.
The astronomers used NASA’s James Webb Space Telescope (JWST) to peer back in time, more than 13 billion years, to study the cosmic surroundings of five known ancient quasars. They found a surprising variety in their neighborhoods, or “quasar fields.” While some quasars reside in very crowded fields with more than 50 neighboring galaxies, as all models predict, the remaining quasars appear to drift in voids, with only a few stray galaxies in their vicinity.
These lonely quasars are challenging physicists’ understanding of how such luminous objects could have formed so early on in the universe, without a significant source of surrounding matter to fuel their black hole growth.
“Contrary to previous belief, we find on average, these quasars are not necessarily in those highest-density regions of the early universe. Some of them seem to be sitting in the middle of nowhere,” says Anna-Christina Eilers, assistant professor of physics at MIT. “It’s difficult to explain how these quasars could have grown so big if they appear to have nothing to feed from.”
There is a possibility that these quasars may not be as solitary as they appear, but are instead surrounded by galaxies that are heavily shrouded in dust and therefore hidden from view. Eilers and her colleagues hope to tune their observations to try to see through any such cosmic dust, in order to understand how quasars grew so big, so fast, in the early universe.
Eilers and her colleagues report their findings in a paper appearing today in the Astrophysical Journal. The MIT co-authors include postdocs Rohan Naidu and Minghao Yue; Robert Simcoe, the Francis Friedman Professor of Physics and director of MIT’s Kavli Institute for Astrophysics and Space Research; and collaborators from institutions including Leiden University, the University of California at Santa Barbara, ETH Zurich, and elsewhere.
Galactic neighbors
The five newly observed quasars are among the oldest ever found. More than 13 billion years old, the objects are thought to have formed between 600 and 700 million years after the Big Bang. The supermassive black holes powering the quasars are a billion times more massive than the sun, and more than a trillion times brighter. Due to their extreme luminosity, the light from each quasar has been able to travel for nearly the age of the universe, far enough to reach JWST’s highly sensitive detectors today.
“It’s just phenomenal that we now have a telescope that can capture light from 13 billion years ago in so much detail,” Eilers says. “For the first time, JWST enabled us to look at the environment of these quasars, where they grew up, and what their neighborhood was like.”
The team analyzed images of the five ancient quasars taken by JWST between August 2022 and June 2023. The observations of each quasar comprised multiple “mosaic” images, or partial views of the quasar’s field, which the team effectively stitched together to produce a complete picture of each quasar’s surrounding neighborhood.
The telescope also took measurements of light in multiple wavelengths across each quasar’s field, which the team then processed to determine whether a given object in the field was light from a neighboring galaxy, and how far a galaxy is from the much more luminous central quasar.
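One small, concrete piece of that processing is measuring how far each candidate neighbor sits from the quasar on the sky. The sketch below uses astropy's coordinate tools with made-up positions; the arcsecond-to-kiloparsec scale is a rough figure for these redshifts, an assumption rather than a value from the paper.

```python
from astropy.coordinates import SkyCoord
import astropy.units as u

# Hypothetical sky positions for a quasar and a candidate neighbor galaxy.
quasar = SkyCoord(ra=150.000 * u.deg, dec=2.200 * u.deg)
galaxy = SkyCoord(ra=150.004 * u.deg, dec=2.203 * u.deg)

sep = quasar.separation(galaxy)          # angular separation on the sky
print(sep.to(u.arcsec))

# At z ~ 6-7, one arcsecond spans roughly 5-6 proper kiloparsecs, so this
# converts an angular offset into an approximate physical separation.
kpc_per_arcsec = 5.5                     # rough scale, an assumption
print(sep.to(u.arcsec).value * kpc_per_arcsec, "kpc (approx.)")
```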
“We found that the only difference between these five quasars is that their environments look so different,” Eilers says. “For instance, one quasar has almost 50 galaxies around it, while another has just two. And both quasars are within the same size, volume, brightness, and time of the universe. That was really surprising to see.”
Growth spurts
The disparity in quasar fields introduces a kink in the standard picture of black hole growth and galaxy formation. According to physicists’ best understanding of how the first objects in the universe emerged, a cosmic web of dark matter should have set the course. Dark matter is an as-yet unidentified form of matter that interacts with its surroundings only through gravity.
Shortly after the Big Bang, the early universe is thought to have formed filaments of dark matter that acted as a sort of gravitational road, attracting gas and dust along their tendrils. In overly dense regions of this web, matter would have accumulated to form more massive objects. And the brightest, most massive early objects, such as quasars, would have formed in the web’s highest-density regions, which would also have churned out many more, smaller galaxies.
“The cosmic web of dark matter is a solid prediction of our cosmological model of the Universe, and it can be described in detail using numerical simulations,” says co-author Elia Pizzati, a graduate student at Leiden University. “By comparing our observations to these simulations, we can determine where in the cosmic web quasars are located.”
Scientists estimate that quasars would have had to grow continuously with very high accretion rates in order to reach the extreme mass and luminosities at the times that astronomers have observed them, fewer than 1 billion years after the Big Bang.
“The main question we’re trying to answer is, how do these billion-solar-mass black holes form at a time when the universe is still really, really young? It’s still in its infancy,” Eilers says.
The team’s findings may raise more questions than answers. The “lonely” quasars appear to live in relatively empty regions of space. If physicists’ cosmological models are correct, these barren regions signify very little dark matter, or starting material for brewing up stars and galaxies. How, then, did extremely bright and massive quasars come to be?
“Our results show that there’s still a significant piece of the puzzle missing of how these supermassive black holes grow,” Eilers says. “If there’s not enough material around for some quasars to be able to grow continuously, that means there must be some other way that they can grow, that we have yet to figure out.”
This research was supported, in part, by the European Research Council.
This image, taken by NASA’s James Webb Space Telescope, shows an ancient quasar (circled in red) with fewer than expected neighboring galaxies (bright blobs), challenging physicists’ understanding of how the first quasars and supermassive black holes formed.
How many images do we see in a day? Scrolling through social media platforms or news sites on our way to school or work, we may have already seen a few hundred images, from memes and photographs to advertisements, posters and videos or animated content.
This visual imagery not only affects our mood and how we think, perceive, and relate to other content; it also has the power to shape narratives, societies, and cultures. In short, visual media impact and shape our lives in ways we may not immediately recognise.
Students explored these insights during a workshop on 9 October that introduced the basic concepts of visual cultures and curatorial practices. The workshop served as an introduction to the Minor in Visual Cultures that will soon be available to all undergraduates at NUS.
NUS first to offer Visual Cultures as a minor across the STEM and humanities fields
When the programme launches in January 2025, NUS will be the first university in Singapore to offer students the option of pursuing a Minor in Visual Cultures. The programme, jointly offered by the Department of Communications and New Media (CNM) at the NUS Faculty of Arts and Social Sciences and NUS Museum, will equip students with skills to understand, critique and ethically use visually driven media and artefacts in and across the STEM and humanities disciplines of design, digital technology, communications and new media, architecture, visual arts, aesthetics, and culture.
This is the first time NUS Museum, a university museum that houses Asian art and cultural collections and facilitates intellectual and cultural life at the university, has co-developed and will co-teach a Capstone course in an undergraduate programme.
“Visual culture is everywhere. The visual tells us a lot about how we live, how we think. Visuality influences our choices, our political views, and our ways of being,” said Programme Director of the Minor in Visual Cultures, Associate Professor Lilian Chee from the Department of Architecture (DoA) at the NUS College of Design and Engineering (CDE). Assoc Prof Chee is also Academic Director of the NUS Museum and holds a joint appointment at CNM. She adds, “In such a visually saturated world, visual literacy is particularly important for understanding the production, consumption and interpretation of information. It is a hugely valuable skill for any field of study, and in any future professional role.”
Assoc Prof Chee emphasised that being multi- and interdisciplinary in nature, the Minor in Visual Cultures strongly aligns with NUS’ goal of developing critical thought leaders with multimodal skills to think, act and communicate effectively across disciplines.
Registration for the Minor in Visual Cultures will open in mid-December 2024. To be certified with the Minor, students enrolled in the programme must take and pass two compulsory core courses and three electives.
These include the compulsory introductory course ‘Reading Visual Images’, offered by DoA, which introduces students to ways of interpreting and discussing works of art, specifically paintings and sculpture.
For electives, students can choose from 27 courses drawn from 11 departments and divisions in the humanities or sciences, such as ‘AI for Design’, ‘Cartography and Geovisualisation’, ‘Modern Art: A Critical Introduction’, ‘Social Psychology of New Media’ and ‘Modern Optics’.
Ms Siddharta Perez, Museum Curatorial Lead at NUS Museum added, “Collaborating with CNM on the Minor in Visual Cultures allows the Museum to impart their industry knowledge and insights to students through hands-on exhibition-making or multidisciplinary research centred on visual cultures. It also draws on the Museum’s rich collections of visual resources, objects and archives on Singapore and the region.”
A “Visual Cultures” Capstone course
The programme culminates in a compulsory “Visual Cultures” Capstone course that examines the significance of the visual and the politics of visuality across the fields of heritage, environmental humanities, philosophy, spatial practices, design, architecture, visual art and performance.
The course will first bring together knowledge gleaned from the theoretical electives. Consolidating different categories of the visual (such as objects and paintings, buildings, maps, social media, photographs and AI-generated images), it gathers key visual theories from different disciplines (film studies, architecture, communications and media, geography, history, and philosophy among them) to discuss, examine, and reflect on their relationships.
In the second half of the course, students will be required to develop projects to demonstrate critical visual cultural thinking and skills. These projects may take on more experimental pathways, involve novel interventions in existing exhibitions, develop a series of public programmes or marketing campaigns or work on expanding or enhancing the Museum’s current collections.
Such project-based and problem-based learning will provide students opportunities to launch a media campaign, pitch an idea for funding, strategise the gathering of resources and their allocation, learn how to use social media ethically, and how to market content through visual media.
Dr Baey Shi Chen, a lecturer with the Department of CNM and Co-convener of the Minor in Visual Cultures shared, “The Capstone course offers a broad-based education, focusing primarily on how the visual both gathers and cuts across a wide range of disciplines. Students will strategise how to collect and curate visual information, and learn how audiences perceive the work. These are transferable skills that can translate across a broad range of professions: from the arts to hospitality, banking, education, media and design, healthcare, science and law.”
Neo Jie Ning, a third-year student with the NUS Yong Siew Toh Conservatory of Music, who attended the workshop to learn more about the Minor in Visual Cultures shared that she was particularly intrigued by the diverse range of courses offered by the Minor. “As someone passionate about photography and visual arts, I believe taking these courses will not only deepen my understanding of visual arts but also offer fresh perspectives that complement my current major in Music and Production.”
Students who are keen to learn more about the field can sign up for the upcoming workshops taking place on 23 October and 6 November.
Irene Heim, an MIT linguist, shares the award with Hans Kamp, a professor of formal logics and philosophy of language at the University of Stuttgart in Germany. Heim and Kamp are being recognized for their independent work on the “conception and early development of dynamic semantics for natural language.”
The Schock Prize in Logic and Philosophy, sometimes referred to as the Nobel Prize of philosophy, is awarded every three years by the Schock Foundation to distinguished international recipients proposed by the Royal Swedish Academy of Sciences. A prize ceremony and symposium will be held at the Royal Academy of Fine Arts in Stockholm Nov. 11-12. MIT will host a separate event on campus celebrating Heim’s achievement on Dec. 7.
A press release from the Royal Swedish Academy of Sciences explains more about the research for which Heim and Kamp were recognized:
“Natural languages are highly context-dependent — how a sentence is interpreted often depends on the situation, but also on what has been uttered before. In one type of case, a pronoun depends on an earlier phrase in a separate clause. In the mid-1970s, some constructions of this type posed a hard problem for formal semantic theory.
“Around 1980, Hans Kamp and Irene Heim each separately developed similar solutions to this problem. Their theories brought far-reaching changes in the field. Both introduced a new level of representation between the linguistic expression and its worldly interpretation and, in both, this level has a new type of linguistic meaning. Instead of the traditional idea that a clause describes a worldly condition, meaning at this level consists in the way it contributes to updating information. Based on these fundamentally new ideas, the theories provide adequate interpretations of the problematic constructions.”
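To give a flavor of what "meaning as context update" can look like, here is a toy sketch in code. It is an illustration in the spirit of dynamic semantics, with invented names and a deliberately tiny domain; it is not Heim's file change semantics or Kamp's discourse representation theory as actually formulated.

```python
# Toy model: a context is a list of candidate assignments of discourse
# referents to individuals; sentence meanings are functions that update it.

FARMERS = {"pedro", "maria"}

def introduce(referent, restrictor):
    """An indefinite ("a farmer") adds a new discourse referent, extending
    each assignment with every individual satisfying the restrictor."""
    def update(context):
        return [dict(g, **{referent: e}) for g in context for e in restrictor]
    return update

def assert_property(referent, prop):
    """A predication ("she sat down") filters out assignments in which the
    referent lacks the property; pronouns reuse an existing referent."""
    def update(context):
        return [g for g in context if g[referent] in prop]
    return update

# "A farmer walked in. She sat down."
context = [{}]                                      # empty initial context
context = introduce("x", FARMERS)(context)          # "a farmer" introduces x
context = assert_property("x", {"maria"})(context)  # toy stand-in property
print(context)                                      # [{'x': 'maria'}]
```

The point of the sketch is the shift it illustrates: each sentence's meaning is not a static condition on the world but a function from information states to information states, which is how a pronoun in a later clause can depend on a referent introduced earlier.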
This is the first time the prize has been awarded for work done in linguistics. The work has had a transformative effect on three major subfields of the discipline: the study of linguistic mental representations (syntax), the study of their logical properties (semantics), and the study of the conditions on the use of linguistic expressions in conversation (pragmatics). Heim has published dozens of texts on the semantics and syntax of language.
“I am struck again and again by how our field has progressed in the 50 years since I first entered it and the 40 years since my co-awardee and I contributed the work which won the award,” Heim said. “Those old contributions now look kind of simple-minded, in some spots even confused. But — like other influential ideas in this half-century of linguistics and philosophy of language — they have been influential not just because many people ran with them, but more so because many people picked them apart and explored ever more sophisticated and satisfying alternatives to them.”
Heim, a recognized leader in the fields of syntax and semantics, was born in Germany in 1954. She studied at the University of Konstanz and the Ludwig Maximilian University of Munich, where she earned an MA in philosophy while minoring in linguistics and mathematics. She later earned a PhD in linguistics at the University of Massachusetts at Amherst. She previously taught at the University of Texas at Austin and the University of California Los Angeles before joining MIT’s faculty in 1989.
“I am proud to think of myself as Irene’s student,” says Danny Fox, linguistics section head and the Anshen-Chomsky Professor of Language and Thought. “Irene’s work has served as the foundation of so many areas of our field, and she is rightfully famous for it. But her influence goes even deeper than that. She has taught generations of researchers, primarily by example, how to think anew about entrenched ideas (including her own contributions), how much there is to gain from careful analysis of theoretical proposals, and at the same time, how not to entirely neglect our ambitious aspirations to move beyond this careful work and think about when it might be appropriate to take substantive risks.”
Irene Heim, a recognized leader in the fields of syntax and semantics, is being recognized for her independent work on the “conception and early development of dynamic semantics for natural language.”
In the current AI zeitgeist, sequence models have skyrocketed in popularity for their ability to analyze data and predict what to do next. For instance, you’ve likely used next-token prediction models like ChatGPT, which anticipate each word (token) in a sequence to form answers to users’ queries. There are also full-sequence diffusion models like Sora, which convert words into dazzling, realistic visuals by successively “denoising” an entire video sequence.
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have proposed a simple change to the diffusion training scheme that makes this sequence denoising considerably more flexible.
When applied to fields like computer vision and robotics, next-token and full-sequence diffusion models involve capability trade-offs. Next-token models can spit out sequences that vary in length. However, they generate without awareness of desirable states in the far future — such as steering the sequence toward a certain goal 10 tokens away — and thus require additional mechanisms for long-horizon (long-term) planning. Diffusion models can perform such future-conditioned sampling, but lack the ability of next-token models to generate variable-length sequences.
Researchers from CSAIL wanted to combine the strengths of both models, so they created a sequence model training technique called “Diffusion Forcing.” The name comes from “Teacher Forcing,” the conventional training scheme that breaks down full-sequence generation into the smaller, easier steps of next-token generation (much like a good teacher simplifying a complex concept).
Diffusion Forcing found common ground between diffusion models and teacher forcing: They both use training schemes that involve predicting masked (noisy) tokens from unmasked ones. In the case of diffusion models, they gradually add noise to data, which can be viewed as fractional masking. The MIT researchers’ Diffusion Forcing method trains neural networks to cleanse a collection of tokens, removing different amounts of noise within each one while simultaneously predicting the next few tokens. The result: a flexible, reliable sequence model that resulted in higher-quality artificial videos and more precise decision-making for robots and AI agents.
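A minimal sketch of that training idea, as we read the description, might look like the following. The linear noise schedule, the loss, and the model's signature (a causal network conditioned on per-token noise levels) are all assumptions for illustration, not the authors' released code.

```python
import torch

def diffusion_forcing_step(model, x, num_levels=1000):
    """One sketched training step. x: clean token sequence of shape
    (batch, seq_len, dim); each token gets its own noise level."""
    b, t, _ = x.shape
    k = torch.randint(0, num_levels, (b, t))     # independent level per token
    alpha = (1.0 - k.float() / num_levels)[..., None]  # toy linear schedule
    noise = torch.randn_like(x)
    # Per-token partial corruption -- the "fractional masking" described above.
    x_noisy = alpha.sqrt() * x + (1.0 - alpha).sqrt() * noise
    x_pred = model(x_noisy, k)     # causal model, conditioned on noise levels
    return ((x_pred - x) ** 2).mean()            # denoise-everything loss
```

Because every token carries its own noise level, the same trained network can behave like a next-token predictor (past tokens clean, future fully noisy) or like a full-sequence denoiser (all tokens partially noisy), which is the flexibility the article describes.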
By sorting through noisy data and reliably predicting the next steps in a task, Diffusion Forcing can aid a robot in ignoring visual distractions to complete manipulation tasks. It can also generate stable and consistent video sequences and even guide an AI agent through digital mazes. This method could potentially enable household and factory robots to generalize to new tasks and improve AI-generated entertainment.
“Sequence models aim to condition on the known past and predict the unknown future, a type of binary masking. However, masking doesn’t need to be binary,” says lead author, MIT electrical engineering and computer science (EECS) PhD student, and CSAIL member Boyuan Chen. “With Diffusion Forcing, we add different levels of noise to each token, effectively serving as a type of fractional masking. At test time, our system can ‘unmask’ a collection of tokens and diffuse a sequence in the near future at a lower noise level. It knows what to trust within its data to overcome out-of-distribution inputs.”
In several experiments, Diffusion Forcing thrived at ignoring misleading data to execute tasks while anticipating future actions.
When implemented in a robotic arm, for example, Diffusion Forcing helped the arm swap two toy fruits across three circular mats, a minimal example of a family of long-horizon tasks that require memory. The researchers trained the robot by controlling it from a distance (teleoperating it) in virtual reality, teaching it to mimic the user’s movements as seen from its camera. Despite starting from random positions and facing distractions like a shopping bag blocking the markers, the robot placed the objects into their target spots.
To generate videos, the team trained Diffusion Forcing on “Minecraft” gameplay and colorful digital environments created within Google’s DeepMind Lab simulator. When given a single frame of footage, the method produced more stable, higher-resolution videos than comparable baselines, such as a Sora-like full-sequence diffusion model and ChatGPT-like next-token models. Those approaches created videos that appeared inconsistent, with the latter sometimes failing to generate working video past just 72 frames.
Diffusion Forcing not only generates fancy videos, but can also serve as a motion planner that steers toward desired outcomes or rewards. Thanks to its flexibility, Diffusion Forcing can uniquely generate plans with varying horizon, perform tree search, and incorporate the intuition that the distant future is more uncertain than the near future. In the task of solving a 2D maze, Diffusion Forcing outperformed six baselines by generating faster plans leading to the goal location, indicating that it could be an effective planner for robots in the future.
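One way to picture that intuition is as a per-token target noise level that rises with lookahead, so near-future tokens are denoised almost fully while distant ones stay vague. The schedule below is our own illustrative assumption, not the paper's.

```python
import torch

def horizon_noise_schedule(seq_len, horizon, max_level=999):
    """Assign a target noise level to each future token: near-zero for the
    next few steps, rising to max_level beyond the planning horizon."""
    steps = torch.arange(seq_len)
    return torch.clamp(steps * (max_level // max(horizon, 1)), max=max_level)

print(horizon_noise_schedule(8, horizon=4))
# tensor([  0, 249, 498, 747, 996, 999, 999, 999])
```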
Across each demo, Diffusion Forcing acted as a full-sequence model, a next-token prediction model, or both. According to Chen, this versatile approach could potentially serve as a powerful backbone for a “world model,” an AI system that can simulate the dynamics of the world by training on billions of internet videos. This would allow robots to perform novel tasks by imagining what they need to do based on their surroundings. For example, if you asked a robot to open a door without being trained on how to do it, the model could produce a video showing the machine how to do it.
The team is currently looking to scale up their method to larger datasets and the latest transformer models to improve performance. They intend to broaden their work to build a ChatGPT-like robot brain that helps robots perform tasks in new environments without human demonstration.
“With Diffusion Forcing, we are taking a step to bringing video generation and robotics closer together,” says senior author Vincent Sitzmann, MIT assistant professor and member of CSAIL, where he leads the Scene Representation group. “In the end, we hope that we can use all the knowledge stored in videos on the internet to enable robots to help in everyday life. Many more exciting research challenges remain, like how robots can learn to imitate humans by watching them even when their own bodies are so different from our own!”
Chen and Sitzmann wrote the paper alongside recent MIT visiting researcher Diego Martí Monsó, and CSAIL affiliates: Yilun Du, a EECS graduate student; Max Simchowitz, former postdoc and incoming Carnegie Mellon University assistant professor; and Russ Tedrake, the Toyota Professor of EECS, Aeronautics and Astronautics, and Mechanical Engineering at MIT, vice president of robotics research at the Toyota Research Institute, and CSAIL member. Their work was supported, in part, by the U.S. National Science Foundation, the Singapore Defence Science and Technology Agency, Intelligence Advanced Research Projects Activity via the U.S. Department of the Interior, and the Amazon Science Hub. They will present their research at NeurIPS in December.
The “Diffusion Forcing” method can sort through noisy data and reliably predict the next steps in a task, helping a robot complete manipulation tasks, for example. In one experiment, it helped a robotic arm rearrange toy fruits into target spots on circular mats despite starting from random positions and visual distractions.
In the current AI zeitgeist, sequence models have skyrocketed in popularity for their ability to analyze data and predict what to do next. For instance, you’ve likely used next-token prediction models like ChatGPT, which anticipate each word (token) in a sequence to form answers to users’ queries. There are also full-sequence diffusion models like Sora, which convert words into dazzling, realistic visuals by successively “denoising” an entire video sequence.
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have proposed a simple change to the diffusion training scheme that makes this sequence denoising considerably more flexible.
When applied to fields like computer vision and robotics, the next-token and full-sequence diffusion models have capability trade-offs. Next-token models can spit out sequences that vary in length. However, they make these generations while being unaware of desirable states in the far future — such as steering its sequence generation toward a certain goal 10 tokens away — and thus require additional mechanisms for long-horizon (long-term) planning. Diffusion models can perform such future-conditioned sampling, but lack the ability of next-token models to generate variable-length sequences.
Researchers from CSAIL want to combine the strengths of both models, so they created a sequence model training technique called “Diffusion Forcing.” The name comes from “Teacher Forcing,” the conventional training scheme that breaks down full sequence generation into the smaller, easier steps of next-token generation (much like a good teacher simplifying a complex concept).
Diffusion Forcing found common ground between diffusion models and teacher forcing: they both use training schemes that involve predicting masked (noisy) tokens from unmasked ones. In the case of diffusion models, they gradually add noise to data, which can be viewed as fractional masking. The MIT researchers’ Diffusion Forcing method trains neural networks to cleanse a collection of tokens, removing different amounts of noise within each one while simultaneously predicting the next few tokens. The result: a flexible, reliable sequence model that produced higher-quality synthetic videos and more precise decision-making for robots and AI agents.
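To make the training idea concrete, here is a minimal PyTorch sketch of per-token noising: every token in a sequence is corrupted to its own independently sampled noise level, and a toy denoiser is trained to recover the added noise. The model, the linear noise schedule, and all names below are illustrative assumptions for exposition, not the authors' released code.

```python
import torch

T_STEPS = 1000                                   # number of discrete noise levels
betas = torch.linspace(1e-4, 0.02, T_STEPS)
alpha_bars = torch.cumprod(1.0 - betas, dim=0)   # cumulative signal retention per level

class TinyDenoiser(torch.nn.Module):
    """Toy stand-in for the sequence denoiser; a real model would be far richer."""
    def __init__(self, dim: int, t_steps: int = T_STEPS):
        super().__init__()
        self.t_emb = torch.nn.Embedding(t_steps, dim)  # tells the net each token's noise level
        self.net = torch.nn.Linear(dim, dim)

    def forward(self, x_noisy, t):
        return self.net(x_noisy + self.t_emb(t))       # predict the noise added to each token

def training_step(model, x, optimizer):
    """x: clean token sequences, shape (batch, seq_len, dim)."""
    b, n, _ = x.shape
    # Key departure from standard diffusion: an *independent* noise level per
    # token ("fractional masking"), not one level for the whole sequence.
    t = torch.randint(0, T_STEPS, (b, n))              # per-token timestep
    a = alpha_bars[t].unsqueeze(-1)                    # (b, n, 1) signal fractions
    eps = torch.randn_like(x)
    x_noisy = a.sqrt() * x + (1.0 - a).sqrt() * eps    # partially "masked" tokens
    loss = torch.nn.functional.mse_loss(model(x_noisy, t), eps)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

model = TinyDenoiser(dim=16)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
print(training_step(model, torch.randn(4, 32, 16), opt))  # one toy update
```

Because a fully noised token carries no usable information, setting future tokens to maximum noise recovers next-token prediction, while a uniform noise level across the whole sequence recovers full-sequence diffusion, which is how a single trained model can play both roles.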
By sorting through noisy data and reliably predicting the next steps in a task, Diffusion Forcing can aid a robot in ignoring visual distractions to complete manipulation tasks. It can also generate stable and consistent video sequences and even guide an AI agent through digital mazes. This method could potentially enable household and factory robots to generalize to new tasks and improve AI-generated entertainment.
“Sequence models aim to condition on the known past and predict the unknown future, a type of binary masking. However, masking doesn’t need to be binary,” says lead author Boyuan Chen, an MIT electrical engineering and computer science (EECS) PhD student and CSAIL member. “With Diffusion Forcing, we add different levels of noise to each token, effectively serving as a type of fractional masking. At test time, our system can ‘unmask’ a collection of tokens and diffuse a sequence in the near future at a lower noise level. It knows what to trust within its data to overcome out-of-distribution inputs.”
In several experiments, Diffusion Forcing thrived at ignoring misleading data to execute tasks while anticipating future actions.
When implemented into a robotic arm, for example, Diffusion Forcing helped swap two toy fruits across three circular mats, a minimal example of a family of long-horizon tasks that require memory. The researchers trained the robot by controlling it from a distance (or teleoperating it) in virtual reality, teaching it to mimic the user’s movements from its camera view. Despite starting from random positions and seeing distractions like a shopping bag blocking the markers, it placed the objects into their target spots.
To generate videos, they trained Diffusion Forcing on “Minecraft” gameplay and colorful digital environments created within Google’s DeepMind Lab Simulator. When given a single frame of footage, the method produced more stable, higher-resolution videos than comparable baselines such as a Sora-like full-sequence diffusion model and ChatGPT-like next-token models. Those approaches created videos that appeared inconsistent, with the latter sometimes failing to generate working video past just 72 frames.
Diffusion Forcing not only generates fancy videos; it can also serve as a motion planner that steers toward desired outcomes or rewards. Thanks to its flexibility, Diffusion Forcing can uniquely generate plans with varying horizons, perform tree search, and incorporate the intuition that the distant future is more uncertain than the near future. In the task of solving a 2D maze, Diffusion Forcing outperformed six baselines by generating faster plans leading to the goal location, indicating that it could be an effective planner for robots in the future.
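One way to picture that uncertainty-aware planning is a denoising schedule in which tokens further in the future are held at higher noise for longer, so the near-term steps of a plan sharpen first. The linear schedule below is a minimal, hypothetical illustration of the intuition, not the paper's implementation.

```python
import torch

def noise_schedule(denoise_steps: int, horizon: int) -> torch.Tensor:
    """Grid of noise levels in [0, 1]: row s = denoising iteration s,
    column k = the token k steps into the future."""
    progress = torch.linspace(1.0, 0.0, denoise_steps).unsqueeze(1)  # global denoising progress
    lookahead = torch.linspace(0.0, 0.5, horizon).unsqueeze(0)       # extra noise for the far future
    return (progress + lookahead).clamp(max=1.0)

sched = noise_schedule(denoise_steps=10, horizon=8)
print(sched[0])   # first iteration: every future token near full noise
print(sched[-1])  # last iteration: near future fully denoised, far future still uncertain
```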
Across each demo, Diffusion Forcing acted as a full sequence model, a next-token prediction model, or both. According to Chen, this versatile approach could potentially serve as a powerful backbone for a “world model,” an AI system that can simulate the dynamics of the world by training on billions of internet videos. This would allow robots to perform novel tasks by imagining what they need to do based on their surroundings. For example, if you asked a robot to open a door without being trained on how to do it, the model could produce a video that’ll show the machine how to do it.
The team is currently looking to scale up their method to larger datasets and the latest transformer models to improve performance. They intend to broaden their work to build a ChatGPT-like robot brain that helps robots perform tasks in new environments without human demonstration.
“With Diffusion Forcing, we are taking a step to bringing video generation and robotics closer together,” says senior author Vincent Sitzmann, MIT assistant professor and member of CSAIL, where he leads the Scene Representation group. “In the end, we hope that we can use all the knowledge stored in videos on the internet to enable robots to help in everyday life. Many more exciting research challenges remain, like how robots can learn to imitate humans by watching them even when their own bodies are so different from our own!”
Chen and Sitzmann wrote the paper alongside recent MIT visiting researcher Diego Martí Monsó and CSAIL affiliates: Yilun Du, an EECS graduate student; Max Simchowitz, former postdoc and incoming Carnegie Mellon University assistant professor; and Russ Tedrake, the Toyota Professor of EECS, Aeronautics and Astronautics, and Mechanical Engineering at MIT, vice president of robotics research at the Toyota Research Institute, and CSAIL member. Their work was supported, in part, by the U.S. National Science Foundation, the Singapore Defence Science and Technology Agency, Intelligence Advanced Research Projects Activity via the U.S. Department of the Interior, and the Amazon Science Hub. They will present their research at NeurIPS in December.
The “Diffusion Forcing” method can sort through noisy data and reliably predict the next steps in a task, helping a robot complete manipulation tasks, for example. In one experiment, it helped a robotic arm rearrange toy fruits into target spots on circular mats despite starting from random positions and visual distractions.
A series of random questions answered by Harvard experts.
Leslie Valiant, the T. Jefferson Coolidge Professor of Computer Science and Applied Mathematics at the John H. Paulson School of Engineering and Applied Sciences, has spent decades studying human cognition. His books include “Circuits of the Mind” and, most recently, “The Importance of Being Educable.”
Notions like smartness and intelligence are almost like nonsense. We think we know what they mean but we can’t define them with precision. Even psychologists can’t agree on one definition. And intelligence tests don’t tell one very much — they are usually justified in terms of correlations with other things.
How do you recognize whether someone is intelligent? There is not one answer; there are many, and they can be inconsistent. Some, like Howard Gardner, have emphasized that there are many kinds of intelligence. I think we’ve reached the expiration date for the usefulness of the term “intelligence” both for humans and machines. We should be able to do better.
I’m a computer scientist and I take a computational approach to understanding what the mind does. In computer science, the main questions, theoretically speaking, have been: What is easy to compute? and What is hard to compute? Some decades ago, I decided that the secrets of human cognition must also be hidden in this problem — that some things are hard to compute for the brain and some things are easier. The main advantage that computer science offers is that one can express capabilities that are more complicated than ones reasonably implied by conventional phrases, and, at the same time, evaluate their feasibility for the brain.
I started working on a computational viewpoint on cognition 40 years ago. The fundamental challenge I set myself was to find a useful definition of learning. In “The Importance of Being Educable,” I define the concept of “educability.” My view is that it wasn’t “intelligence” that allowed humans to create civilizations, but educability, which involves three aspects.
The first is learning from experience. The second is being able to chain together the things you’ve learned; it’s a kind of low-level reasoning capability that even the simplest animals have because it’s so essential to life. The third is being able to incorporate knowledge acquired from instruction. This last one is very important for humans because this is how culture spreads and science progresses.
Educability incorporates both the ability to generate new knowledge by learning from experience and also the ability to transfer that knowledge directly to others. There isn’t the time or the need for everyone to gain the same experiences, such as repeating difficult scientific experiments.
I’d say that machines can also be made educable, and ultimately, we won’t be able to claim that we’re fundamentally different from machines. Current AI systems are not designed to be educable in the sense I define, but machines will likely become more and more capable in that direction. I don’t see AI as an existential threat; it’s just another powerful technology. Obviously, in bad hands, it can be misused, just like chemistry or nuclear physics. Computers will not take over the world just because they want to. This will happen only if we allow it to happen.
There is a downside of being educable. Educability gives us very powerful ways of acquiring new information — we can soak it all up. But we don’t have comparable abilities to check whether the information we get is true or not. We are not well-equipped for evaluating knowledge, theories, or facts. If someone tells us something, if we believe it, we will incorporate it into our knowledge. This can be dangerous. The only cure is to educate people about what propaganda has done over the centuries and make them aware of this human weakness. To inoculate ourselves against disinformation we need to acknowledge our basic weakness.
Xie, a materials engineer, won a 2024 Packard Fellowship for creating atomically thin materials. “Thinking and inventing down to an atomic level like Saien is doing, most spectacularly I should add, is the future,” said James Sturm, ECE department chair.
The findings could lead to new treatments targeting a particular protein to better manage inflammation in patients who don’t respond well to existing therapies.
Novelist and boxer Laura van den Berg says the two practices have a lot in common
Eileen O’Grady
Harvard Staff Writer
Laura van den Berg circled her partner on the mat at a Central Square boxing gym on a recent Tuesday evening, hands raised protectively in front of her face, delivering quick, precise jabs into the training mitts. Others in the class paused for water between exercise intervals, but van den Berg remained in place, bouncing on her toes, keeping loose for the next round.
This past summer van den Berg, senior lecturer in English, embarked on two parallel journeys: publishing her sixth novel, “State of Paradise,” and training for a boxing match that will take place this weekend in Waltham. She believes the practices foster similar useful qualities.
“Writing a novel and training for a fight both require an immense trust in process,” van den Berg said. “There are going to be times when you are really tired, overwhelmed, or defeated, and you have to trust in your program. Whether your program for a writer is writing five pages a day, or your program as a fighter is showing up to your daily training session, you have to trust in the power of cumulative labor over time.”
Van den Berg started writing “State of Paradise,” which was published in July, during the pandemic when she and her husband, Paul Yoon, also a fiction writer and senior lecturer in English, were living in her native state of Florida. It began as a daily practice of writing meditations on aspects of her surroundings: weather, landscapes, or family life.
“I did this for about six months with no expectation that it would turn into a book. Then a strange thing happened,” van den Berg said. “I realized I was writing these meditations in a voice that was like mine but also not mine. And that was the voice of the protagonist stepping forward.”
The novel follows a ghostwriter for a famous thriller author as her everyday life in humid small-town Florida is disrupted by strange events — extreme weather, sinkholes, missing people, a cult in her living room, and a disorienting virtual reality device — that challenge her perception of reality.
While in Florida writing the novel, van den Berg, who began boxing for fitness and mental health in 2018, started getting more involved in competitive boxing. After her first USA Boxing-sanctioned fight in 2021 (which she lost by decision), she knew she wanted to try again but wasn’t sure she could find the time.
This semester, Laura van den Berg is teaching a fiction workshop, “The Art of the Short Story.”
Stephanie Mitchell/Harvard Staff Photographer
“It can be easy to feel like, ‘Oh, I’ll do it next year when I’m not publishing a book, when I have more room in my schedule,’” van den Berg recalled. “I think I just realized at a certain point that the ‘chill year’ I’m waiting for is never going to arrive, and I just need to dive in.”
She began training hard with the help of a coach at Cherry Street Boxing in Pittsfield, Massachusetts. She focused on both technique and endurance to prepare for the amateur match, which includes three rounds, each a minute and a half to three minutes in length.
Even during her book tour in July and August, van den Berg kept up her routine, waking early to run up hills in San Francisco, for example, before catching a flight to her next tour stop.
“You know your future opponent is out there somewhere, and you should assume that she’s getting up in the morning, doing whatever she needs to do to be successful on fight night,” van den Berg said. “That motivates me.”
This semester, van den Berg is teaching a fiction workshop, “The Art of the Short Story,” and co-teaching “Reading for Fiction Writers” with Neel Mukherjee, associate senior lecturer in creative writing. She divides her time between Cambridge and her Hudson Valley home in New York state, where she spends weekends, school breaks, and summers. While Cherry Street is her primary gym, she can often be found training at Redline Fight Sports when she is in Cambridge.
She often travels to sparring events around New England, seeking out new female boxers to practice with, something that can help prepare her for facing an unknown opponent on fight night.
To document her parallel journeys, van den Berg started a newsletter called “Fight Week,” where she writes about the intersection of writing and fighting. The title refers to the days leading up to a match, following the last hard training session — a time she describes as standing on the edge of a precipice, “about to step off into the air.”
Van den Berg believes writing and boxing both require a certain level of comfort with risk — whether it’s risking failure with a new narrative structure or injury in the ring.
“‘State of Paradise’ is certainly the most personal book that I’ve ever written. There’s a lot of me in there, so it was emotionally risky in a way that my other books really haven’t been,” van den Berg said. “In boxing, the more punches you throw, the more vulnerable you are. There’s risk in going hard.”
As the weekend fight approaches, van den Berg feels confident, embracing the nerves that come with the territory and knowing she’s done everything she can to prepare.
“There’s no way to know for sure what the outcome will be,” van den Berg said. “If you’re 100 pages into a novel, you can’t say for sure what it will be like when you finish. You can’t know for sure what the outcome of a fight will be. You have to have a deep belief in the process and cultivate a tolerance for sitting in doubt and uncertainty and being able to move through those emotions.”
Photos by Stephanie Mitchell/Harvard Staff Photographer
Christy DeSmith
Harvard Staff Writer
Michael J. Sandel brings back wildly popular ‘Justice’ course amid time of strained discourse on college campuses
Which is better? The 1996 film adaptation of “Hamlet” starring Kenneth Branagh or a spoof on the Prince of Denmark’s “to be or not to be” soliloquy delivered by Homer Simpson?
The question was posed last month to more than 800 undergraduates during “Justice: Ethical Reasoning in Polarized Times.” The legendary Gen Ed course has returned to Sanders Theatre this semester after more than a decade of being available only as a prerecorded online offering. Originally launched in 1980, the course became wildly popular for its format: guided student debate of the hottest issues of the day, informed by study of classic theories on moral decision-making.
“The first thing I want to know is which one you like most,” announced “Justice” creator Michael J. Sandel, the Anne T. and Robert M. Bass Professor of Government. Most in the room, dismissing the third option of a WWE body-slam clip, lavished applause on “The Simpsons.”
But the conversation turned spiky on a second point: “Can we derive what’s higher, what’s worthier, what’s nobler, from what we like most?” Sandel asked. “Or is there a gap between the two?”
One student stuck with “The Simpsons,” arguing that it was the most pleasurable. Another noted the cartoon excerpt owed its existence to Shakespeare’s worthier source material. Still others characterized the TV show as fleeting pleasure versus the intellectual and even spiritual nourishment of high art.
From the stage, Sandel invited these divergent responses while pressing the room to consider new angles. “It’s an opportunity for students to dive into why they think the way they do,” observed Darlene Uzoigwe ’25, a government concentrator from Brooklyn.
Michael Sandel (foreground) takes a question from Yaroslav Davletshin ’28 in the balcony.
Generations of Harvard graduates, including U.S. Supreme Court Justice Ketanji Brown Jackson ’92, J.D. ’96, and former U.S. Attorney for the Southern District of New York Preet Bharara ’90, have cited the impact the course had on their careers and lives. In 2009, a recorded version went on to become the first Harvard course freely available online.
“It was an experiment in using new technology to open access to the Harvard classroom,” Sandel said in an interview last month. “We never dreamt that tens of millions of people around the world would want to watch lectures on philosophy.” More than 38 million have viewed the course on YouTube, and millions more on foreign language web platforms.
The in-person offering was paused when the political philosopher noticed first-years enrolling despite having watched “Justice” in high school. “I tried for a couple of years to change some of the examples, stories, even the jokes,” Sandel said. “But I found I liked the original version and didn’t want to change everything. So, I decided to let it live online and teach other subjects.”
Most of these courses, including those on technology and globalization, were capped at 200 students. Trevor DePodesta kept trying to get a spot.
“It took me until now to get into one of his classes,” said DePodesta ’25, an Ethical Human-AI Interaction concentrator from San Diego. “I felt like the Harvard experience wouldn’t be complete until I sat in a lecture hall with him.”
Trevor DePodesta ’25 (center) asks a question.
As any alum of the course will know, the Homer-Hamlet matchup is really about exploring ideas about high and low pleasures outlined by 19th-century Utilitarian philosopher John Stuart Mill.
“I didn’t think you could discuss ‘Simpsons’ vs. ‘Hamlet’ for so long,” said Saskia Hermann ’28, a first-year from Germany. “What I’ve learned so far is that you can always look at something twice — once from your initial point of view, and then you can apply a certain philosophical idea to look at it from that perspective.”
Also familiar to “Justice” veterans are course readings by Mill’s Utilitarian predecessor Jeremy Bentham as well as Aristotle, Immanuel Kant, and John Rawls. Bringing the class back to Sanders means Sandel has now updated the ethically charged issues that test the philosophers’ ideas.
The first weeks of the course, which carries, at times, the fizzy energy of a concert, covered readings and lectures on the philanthropic movement reportedly embraced by former cryptocurrency billionaire Sam Bankman-Fried, who was convicted this year on fraud charges. In one hypothetical, Sandel posed whether it was better to solve urgent medical needs in the developing world by becoming a physician or by making a lot of money in cryptocurrency and donating, say, $50 million to Doctors Without Borders.
“It’s a test of the Utilitarian philosophy underlying the Effective Altruism movement,” he explained.
Later, students will delve into climate change, artificial intelligence, and the polarizing consequences of social media. “From the time the course was first offered in the 1980s we’ve discussed affirmative action,” Sandel added. “Now we continue the discussion in light of the U.S. Supreme Court ruling against race-based affirmative action in university admissions.”
The real motivation for relaunching the course, Sandel said, was student feedback about the strained state of dialogue on campus. “It’s definitely true that there isn’t a lot of civil discourse going on,” said Maia Hoffenberg ’26, a linguistics concentrator from the Washington, D.C., area. “People have become entrenched in their own ideas.”
Sandel has been a supporter of other campus initiatives designed to boost civil discourse and intellectual vitality. Last winter he hosted a one-day session for faculty on cultivating healthy debate and another inviting students to grapple with the tangled ethics of artificial intelligence. At orientation this fall, Sandel offered a primer on engaging with highly disputed but important issues.
Hermann enjoyed the latter conversation so much she dropped a course to pick up “Justice.” “I really liked the way he asked questions and made us get to the point, rather than just lecturing on what he believes,” she said.
Others taking “Justice” have been reading Sandel since high school — and clearly consider themselves fans.
“Yeah, that’s my boy!” called out one student as Sandel appeared for a lecture last month.
After class, a handful of devotees line up near the podium clutching dog-eared copies of Sandel’s “Justice” (2009) or “The Tyranny of Merit” (2020). “I’ve been able to snag him after class a few times,” DePodesta shared. “He promised that if I come find him outside of class, he’ll sign my books.”
Offering such a large-scale course meant Sandel spent the summer recruiting, interviewing, and hiring an army of “Justice” teaching fellows. A staff of 32 graduate students this fall helms a total of 64 sections, with many scheduled in the early evening at first-year dorms and various Harvard Houses. The goal is to “carry the learning beyond Sanders Theatre,” said Sandel, who won’t teach the course next fall due to a planned sabbatical.
That encourages students to continue conversations about immigration, abortion, reparations, and extreme wealth — to name a few topics — over dinner. “My roommates and I have these intense debates after every single class,” shared Leverett House resident Hoffenberg. “We get mad at each other, but it’s all very lively and very academic. It’s honestly been one of the best things about taking the class.”
Bit of happenstance, second look at ancient fossils leads to new insights into evolution of tardigrade, one of most indestructible life forms on planet
They may be microscopic, but tardigrades are larger than life.
Called “water bears” because of their plump shape and lumbering movement, the ancient micro-animals are nearly indestructible, able to survive anything from deadly radiation and arctic temperatures to the vacuum of space.
They can still be found anywhere there’s water today, but the evolutionary history of these eight-legged micro-animals remains relatively mysterious because of their sparse fossil record.
Now, in a new study published in Communications Biology, Associate Professor of Organismic and Evolutionary Biology Javier Ortega-Hernández and Ph.D. candidate Marc Mapalo were able to confirm another entry in the fossil record, which now stands at just four specimens. The research represents a significant advancement in the field of paleontology because it offers new avenues for exploring the evolutionary history of one of the planet’s most resilient life forms.
In their study, the researchers examined a piece of amber found in Canada in the 1960s that contains the known fossil tardigrade Beorn leggi and another presumed tardigrade that couldn’t be substantively described at the time. Using confocal laser microscopy, a method usually employed for studying cell biology, the researchers were able to examine the tiny structures of the fossil tardigrades in detail.
Ortega-Hernández and Mapalo’s study provides not only a definitive classification of B. leggi in the tardigrade family tree, but the identification of a new species of tardigrade as well.
“Both of them are found in the same piece of amber that dates to the Cretaceous Period, which means that these water bears lived alongside dinosaurs,” Ortega-Hernández said. “The images of B. leggi show seven well-preserved claws, with the claws that curve toward the body being smaller than those curving away from it, a pattern found in modern-day tardigrades.”
Amber with Beorn and Aerobius. Artistic reconstruction of the two fossil specimens.
Photo by Marc Mapalo; Illustration by Anthony Franz.
The second, previously unidentified specimen had claws of similar length on each of its first three pairs of legs, but longer outer claws on its fourth set of legs. The team named it Aerobius dactylus, from “aero,” meaning relating to air — because the fossil appears to be floating on air in the amber — and “dactylo,” or finger, after its one long claw.
The impetus for applying this new technology to known fossils came when Mapalo, a self-described “paleo-tardigradologist,” came across the 2019 book “Water Bears: The Biology of Tardigrades.”
“In one of the chapters, they had a photo of the oldest fossil tardigrade that was visualized using both normal microscopy and confocal laser microscopy,” Mapalo said. “And that gave me the idea to use that with the fossil that I’m working with right now.”
That fossil, encased in a piece of amber from the Dominican Republic, turned out to be a new species of tardigrade. Mapalo, along with Ortega-Hernández and researchers from the New Jersey Institute of Technology, published their findings in a 2021 paper.
Left: Ventral view of Beorn leggi photographed with transmitted light under compound microscope (A), with autofluorescence under confocal microscope (B), and schematic drawing; Right: Habitus of Aerobius dactylus ventral (A,D) and dorsal view (E,F) photographed using confocal microscope and compound microscope. Schematic drawing (C), specimen and claws viewed in inverted greyscale to highlight autofluorescence intensity (D,F).
Source: “Cretaceous amber inclusions illuminate the evolutionary origin of tardigrades”
In their latest study, both fossils serve as critical calibration points for what’s called molecular clock analysis, which helps scientists estimate the timing of key evolutionary events. For example, the latest findings suggest that modern tardigrades likely diverged during the Cambrian Period more than 500 million years ago.
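For readers unfamiliar with the method: in its simplest form, a molecular clock converts measured genetic divergence into elapsed time. A minimal version of the relation (notation ours, for illustration) is

$$T \approx \frac{D}{2r},$$

where $D$ is the genetic distance between two lineages and $r$ is the per-lineage substitution rate; fossil calibration points such as these anchor $r$ so that inferred divergence times $T$ stay consistent with the fossil record.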
The research also sheds light on the origin of cryptobiosis, the technical name for the remarkable ability of tardigrades to survive extreme conditions by entering a state of stasis.
“The study estimates that this survival mechanism likely evolved during the mid- to late Paleozoic, which may have played a crucial role in helping tardigrades endure the end-Permian mass extinction, one of the most severe extinction events in Earth’s history,” Ortega-Hernández said.
“Before I started my Ph.D., there were only three known fossil tardigrades, and now there’s four,” Mapalo said. “Most, if not all, of the fossil tardigrades were really discovered by chance. With the Dominican amber, researchers were looking for fossil ants, and they happened to see a fossil tardigrade there.
“That’s why, whenever I have a chance, I always tell researchers who are working with amber fossils to check if maybe there’s another tardigrade in there, waiting to be found.”
Studies connect genetics, physics in embryonic development
Genes are the control panel for an embryo morphing from a ball of cells into organs, muscles, and limbs, but there’s more involved than just genetics. There’s also physics — the shaping of tissues by flows and forces from cellular activity and growth.
Two recent studies in Developmental Cell and Proceedings of the National Academy of Sciences shed light on the gene-mediated geometries and forces within embryonic development that give rise to different sections and shapes of the gut, including the large and small intestines. The findings bridge a critical gap between genetic signals and the physical formation of the early gut.
The Developmental Cell paper, led by former Griffin Graduate School of Arts and Sciences student Hasreet Gill, shows how a set of developmental instructions called Hox genes dictate gut formation. For the study, Gill and colleagues traced the gut development of a chicken embryo as a model organism; Hox genes are also found in humans and all other vertebrates.
“I wanted to understand why different regions in the intestine, from the anterior, meaning esophagus, to the posterior, meaning large intestine, end up with different shapes,” said Gill, who co-authored both papers with her Ph.D. adviser Clifford Tabin, the George Jacob and Jacqueline Hazel Leder Professor of Genetics at Harvard Medical School. Gill was a student in the Department of Molecular and Cellular Biology’s Molecules, Cells, and Organisms program.
“I wanted to understand why different regions in the intestine, from the anterior, meaning esophagus, to the posterior, meaning large intestine, end up with different shapes.”
Hasreet Gill
The study connected experiment to computational theory through a collaboration with Sifan Yin, a former postdoctoral fellow in the John A. Paulson School of Engineering and Applied Sciences, and L. Mahadevan, professor in applied mathematics, physics, and biology in SEAS and FAS.
Gill’s study built on previous work looking at how Hox genes are involved in organ differentiation. The set of genes, highly conserved throughout animal evolutionary history, was the subject of the 1995 Nobel Prize when they were recognized for their role in segmenting a fruit fly’s body.
Gill and colleagues discovered that measurable mechanical properties of the tissues that make up the large and small intestines of a chick embryo are directly involved in how they arrive at their final shapes. For example, the tissues that form the villi located in the small intestine, she found, have different stiffness parameters than those that shape the inside walls of the large intestine, which form larger, flatter, more superficial folds.
To test the consequences of all these mechanical differences, the lab turned to its longstanding collaboration with Mahadevan’s lab, whose members, including Yin, carried out theoretical and computational analysis to define the impact of physical forces generated via differential growth on organ shape.
It had long been known that Hox genes are the instructions that lay the groundwork for how different organs, including the gut, are sectioned off and shaped. But the detailed “how” of this process had been a mystery.
To solve it, Gill and colleagues revisited a 1990s-era experiment from the Tabin lab that had investigated this question. In that experiment, researchers expressed a particular Hox gene in the developing small intestine and found that it took on the characteristics of a large intestine.
Gill’s team repeated the experiment while running physical tests on the mechanical characteristics of the different parts of the gut, considering things like wall stiffness, growth rate, and tissue thickness. They found that the HoxD13 gene in particular regulates the mechanical properties and growth rates of the tissues that eventually lead to the large intestine’s final shape. Other, related Hox genes may define those same properties for the small intestine.
Crucially, they also illuminated the role of a downstream signaling pathway called TGF-beta, which is controlled by Hox genes. By tuning the amount of TGF-beta signaling in their embryos, they could switch the shapes of the different gut regions. Seeing the importance of this pathway, long known to be involved in fibrotic conditions, was an important basic-science step toward fully understanding gut development in a vertebrate system.
These insights could lead to new knowledge of conditions such as colon cancer and other fibrotic diseases of the gut, Gill said.
“One possibility is that the disease is co-opting a developmental program that can cause an excessive deposition of extracellular matrix, and this ends up being harmful to the patient,” she said. “Having this developmental context, especially related to Hox gene expression, might prove useful at least for understanding the broader context of why these diseases are happening in people.”
The complementary PNAS paper, co-led by Gill and Yin, showed how geometry, elastic properties, and growth rates control various mechanical patterns in different parts of the gut.
“We focused on how mechanical and geometric properties directly affect morphologies, especially more complicated, secondary buckling patterns, like period-doubling and multiscale creasing-wrinkling patterns,” said Yin, an expert in theoretical modeling and numerical simulations of active and growing soft tissues.
Added Mahadevan: “These studies allow us to begin probing aspects of the developmental plasticity of gut development, especially in an evolutionary context. Could it be that natural variations in the genetic signals lead to the variety of functional gut morphologies that are seen across species? And might these signals be themselves a function of environmental variables, such as the diet of an organism?”
Yin said the two papers provide a new paradigm for studying how genes affect the development of shape, or morphogenesis.
“Morphogenesis is driven by forces arising from cellular events, tissue dynamics, and interactions with the environment,” Yin said. “Our studies bridge the gap between molecular biology and mechanical processes.”
Using a new method, researchers at ETH Zurich can measure alterations in the social network of proteins in cells. This work lays the foundation for the development of new drugs to treat diseases such as cancer and Alzheimer’s.
It is Nobel prize-giving season, and last week alumni of Cambridge University were awarded four of the prizes for their brilliant work. In a leading article, The Times pointed out that if it were a country, Cambridge’s total of 125 would place it third for Nobel laureates, behind only the UK and the US. This is a tribute to outstanding work which changes lives.
As an American who took over the role of vice-chancellor a little over a year ago, I am struck by how often the value of world-class research universities — to our economy, and to society and our daily lives — is underestimated here in Britain. Two of the top five in the world are British and in higher education terms this country punches well above its weight. This week we will announce a huge new investment in transformative research focused on a common foe: cancer. It will bring global benefits.
Our world-class, research-intensive universities are national assets. They can be genuine drivers of economic growth. Cambridge research contributes a staggering £30 billion to the UK economy each year. By contrast, no single US university plays such a national role.
And yet some UK research universities are in a precarious financial state. They are vital to their local and regional communities, as well as to Britain as a whole. They need more than just recognition if they are to drive future sustainable growth: they need investment, yes, but also support to innovate so they can continue to break new research frontiers and to serve their communities.
In Cambridge, we plan to launch an innovation hub that attracts and hosts the best researchers from around the world, plus entrepreneurs, funders and philanthropists. Under one roof, ideas will be driven forward before they are spun out. The US and France have successfully pointed the way with such hothouses for innovation in Boston and Paris. The UK must catch up — and fast.
With the budget looming, even — or perhaps especially — in tough times, the government must see our research universities as key allies and partners in its mission to drive economic growth. They are one of the main advantages this country enjoys in the global race for economic success.
At the start of a new academic year, thousands of students and researchers have arrived in Cambridge and at other British universities. We should invest in our world-class institutions and their contributions to tackling society’s greatest challenges so that in the decades to come we will have more Nobel laureates to celebrate.
This article first appeared in The Times on 14 October 2024.
Professor Deborah Prentice, Vice-Chancellor of the University of Cambridge, writes in The Times about how universities can drive UK growth - but they need more than just recognition.
He studies cancer cells and their cellular environment in order to develop new therapies. Now, ETH Zurich Professor Andrea Alimonti is being awarded the Cloëtta Prize.
Religious communities around the world have been confronted with the advent of digital media platforms, a trend that began even before the COVID-19 pandemic. Since the adoption of these platforms, religious communities are no longer as bound to physical spaces as they used to be. How, then, has the adoption and usage of digital media platforms affected religious community and practice in Singapore? While the COVID-19 pandemic has increased the intensity of digitalisation among religious organisations, digitalisation is not equally distributed across religious activities and organisations.
In our survey of religious organisations – the pilot study of a series of studies conducted in conjunction with the NUS Asia Research Institute (ARI) that explores the evolving relationship between religious communities and digital media in Singapore – we examine and document digitalisation among local religious organisations. Underpinning our survey was the question of whether the COVID-19 pandemic provided the impetus for religious organisations in Singapore to adopt digital media platforms. We found that the pandemic increased the rate of adoption among organisations that had not yet adopted digital media platforms before it began (mainly non-Muslim and non-Christian organisations), while organisations that were already using these platforms before the pandemic (mainly Christian and Muslim organisations) increased their usage.
Yet, while religious organisations reported using digital media platforms more in general, certain activities conducted by religious organisations have resisted going online more than others. Based on data gathered on the percentage of activities conducted online compared to offline, we found that synchronous activities (activities that require real-time interaction) were less likely than asynchronous activities (activities that do not require real-time interaction) to undergo digitalisation. Religious organisations reported the most instances of digitalising administrative tasks that did not require synchronous in-person participation. These administrative activities include the internal circulation of announcements, newsletters, and circulars, as well as the collection of offerings and donations and the dissemination of religious materials for self-study. On the other hand, synchronous activities that resisted digitalisation include the conduct of prayers and rituals, as well as meetings (study groups and committee meetings), and community-building activities.
Digitalisation and the blurring of boundaries between public and private spaces
The resistance of synchronous activities to digitalisation is noteworthy: despite the availability of technology that can and did simulate meetings and worship sessions during the pandemic, respondents still reported conducting these activities mostly offline. While many respondents indicated an intention to digitalise further, it remains to be seen what the focus of their efforts will be and which activities will be digitalised more than others. Answers to these questions will allow for a deeper understanding of the pathways through which digitalisation can affect religious community and practice. For instance, some religious organisations reported feeling ambivalent about their experience of digitalisation given how it has blurred the boundaries between public and private spaces – where anything said or done in private settings could potentially become public due to livestreaming, or to speech, text, or footage becoming available to unintended audiences.
On the one hand, digital media platforms afford more privacy, as activities conducted on them do not have to take place in public spaces. On the other hand, the potentially public nature of online interactions could have far-reaching and long-lasting effects. As sociologist José van Dijck argues in “Engineering Sociality in a Culture of Connectivity” (2013), social media has “unquestionably altered the nature of private and public communication” (pp. 3-23). With the advent of digital media platforms and the increasing digitalisation of religious services, the lines between public and private spheres are blurring. This blurring has legal implications, one instance of which pertains to what counts as public speech. If all private social interaction potentially reaches a public audience, what can be considered, and held liable as, private or public utterance? More broadly, how has the wider range of audiences afforded by digital media platforms changed the way members of religious communities relate to each other? And how do the new, technology-enabled ways of forming and sustaining relationships in religious communities affect the way religious beliefs are enacted?
Most existing laws pertaining to religious practice apply to religious activities conducted in public spaces, as the premise was that religion had to be enacted in public spaces. Our findings suggest that it may still be some time before the changes in religious practice caused by digitalisation are substantial enough for a radical legal overhaul, as “core” religious activities – that is, religious activities that set religions apart from each other, such as worship activities – are the activities most resistant to digitalisation. In spite of the observed trend towards increased digitalisation, these findings reveal a continued attachment to physical space among religious practitioners in Singapore.
As such, further research should be conducted to understand why some organisations and activities are resistant to digitalisation. This would help us answer the important question of the role that doctrinal and theological factors play in the adoption or non-adoption of digital media platforms. Such research contributes to our understanding of the limits of digital and technological mediation in replacing real-time, physically bounded interaction. As sociologist Craig Calhoun ponders in his classic piece “Community Without Propinquity Revisited” (1998), what is community without the physical presence of others in the same space? Our results support the idea that the “magic” of physical interaction is still an important factor to consider when thinking about the relationship between religious practice and digitalisation. Perhaps the complexity and, indeed, “warmth” of real-time social interaction cannot be replaced by digitally-mediated interaction – at least not just yet.
By using religion as a case study through which we study the socio-legal effects of digitalisation on religious communities, our study speaks to scholarship on the evolving relationship between digital media and society. These findings regarding digitalisation as experienced by religious organisations bring to mind broader issues pertaining to digitalisation as experienced in society – issues that have to do with how technology is affecting the way we relate to each other as well as the limits of technology in replacing the physical, real-time interaction traditionally associated with community.
Benjamin Low is a Research Associate with the Centre for Asian Legal Studies (CALS) at the NUS Faculty of Law. At CALS, he is currently working on a research project exploring the socio-legal implications of the digitalisation of religion in Singapore that is so new that it has yet to be formally named. A cultural and organisational sociologist specialising in social networks and innovation, he is completing his doctoral studies at the University of Oxford Department of Sociology. A seasoned jazz drummer and National Arts Council Scholarship recipient, he can be found making music in and around Singapore when not (actively) doing research. This piece was written on behalf of the CALS-ARI research team, which consists of Principal Investigator Associate Professor Jaclyn Neo (CALS), co-Principal Investigator Dr Erica Larson (ARI), and the author. This research project is generously funded by the Humanities and Social Sciences Seed Fund grant.
With its latest space mission successfully launched, NASA is set to return for a close-up investigation of Jupiter’s moon Europa. Yesterday at 12:06 p.m. EDT, the Europa Clipper lifted off aboard a SpaceX Falcon Heavy rocket on a mission that will take a close look at Europa’s icy surface. Five years from now, the spacecraft will visit the moon, which hosts a water ocean covered by a water-ice shell. The spacecraft’s mission is to learn more about the composition and geology of the moon’s surface and interior and to assess its astrobiological potential. Because of Jupiter’s intense radiation environment, Europa Clipper will conduct a series of flybys, with its closest approach bringing it within just 16 miles of Europa’s surface.
MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) Research Scientist Jason Soderblom is a co-investigator on two of the spacecraft’s instruments: the Europa Imaging System and the Mapping Imaging Spectrometer for Europa. Over the past nine years, he and his fellow team members have been building imaging and mapping instruments to study Europa’s surface in detail to gain a better understanding of previously seen geologic features, as well as the chemical composition of the materials that are present. Here, he describes the mission's primary plans and goals.
Q: What do we currently know about Europa’s surface?
A: We know from NASA Galileo mission data that the surface crust is relatively thin, but we don’t know how thin it is. One of the goals of the Europa Clipper mission is to measure the thickness of that ice shell. The surface is riddled with fractures that indicate tectonism is actively resurfacing the moon. Its crust is primarily composed of water ice, but there are also exposures of non-ice material along these fractures and ridges that we believe include material coming up from within Europa.
One of the things that makes investigating the materials on the surface more difficult is the environment. Jupiter is a significant source of radiation, and Europa is relatively close to Jupiter. That radiation modifies the materials on the surface; understanding that radiation damage is a key component to understanding the composition.
This is also what drives the clipper-style mission and gives the mission its name: we clip by Europa, collect data, and then spend the majority of our time outside of the radiation environment. That allows us time to download the data, analyze it, and make plans for the next flyby.
Q: Did that pose a significant challenge when it came to instrument design?
A: Yes, and this is one of the reasons that we're just now returning to do this mission. The concept of this mission came about around the time of the Galileo mission in the late 1990s, so it's been roughly 25 years since scientists first wanted to carry out this mission. A lot of that time has been figuring out how to deal with the radiation environment.
There's a lot of tricks that we've been developing over the years. The instruments are heavily shielded, and lots of modeling has gone into figuring exactly where to put that shielding. We've also developed very specific techniques to collect data. For example, by taking a whole bunch of short observations, we can look for the signature of this radiation noise, remove it from the little bits of data here and there, add the good data together, and end up with a low-radiation-noise observation.
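As a rough illustration of that technique, the snippet below stacks many short exposures and rejects radiation-hit pixels as statistical outliers before averaging. Median-based sigma-clipping is a standard trick in astronomical imaging; the actual flight pipeline is certainly more sophisticated, and every name here is hypothetical.

```python
import numpy as np

def stack_exposures(frames: np.ndarray, n_sigma: float = 3.0) -> np.ndarray:
    """frames: (n_exposures, H, W) stack of short observations.
    Returns a radiation-suppressed (H, W) image."""
    med = np.median(frames, axis=0)                  # robust per-pixel baseline
    mad = np.median(np.abs(frames - med), axis=0)    # robust spread estimate
    sigma = 1.4826 * mad + 1e-12                     # MAD -> std (Gaussian noise)
    clean = np.abs(frames - med) < n_sigma * sigma   # mask out radiation spikes
    counts = clean.sum(axis=0).clip(min=1)           # clean samples per pixel
    return (frames * clean).sum(axis=0) / counts     # average only the good data

# Example: 20 short 64x64 exposures with simulated radiation hits.
rng = np.random.default_rng(0)
frames = rng.normal(100.0, 2.0, size=(20, 64, 64))
hits = rng.random(frames.shape) < 0.001              # ~0.1% of pixels get hit
frames[hits] += 500.0                                # bright spikes from particle strikes
image = stack_exposures(frames)
```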
Q: What will each of the two instruments investigate?

A: The camera system [EIS] is primarily focused on understanding the physics and the geology that's driving processes on the surface, looking for: fractured zones; regions that we refer to as chaos terrain, where it looks like icebergs have been suspended in a slurry of water and have jumbled around and mixed and twisted; regions where we believe the surface is colliding and subduction is occurring, so one section of the surface is going beneath the other; and other regions that are spreading, so new surface is being created like our mid-ocean ridges on Earth.
The spectrometer’s [MISE] primary function is to constrain the composition of the surface. In particular, we're really interested in sections where we think liquid water might have come to the surface. Understanding what material is from within Europa and what material is being deposited from external sources is also important; separating the two is necessary to understand the composition of the material coming from Europa and to use that to learn about the composition of the subsurface ocean.
There is an intersection between those two, and that's my interest in the mission. We have color imaging with our imaging system that can provide some crude understanding of the composition, and there is a mapping component to our spectrometer that allows us to understand how the materials that we're detecting are physically distributed and correlate with the geology. So there's a way to examine the intersection of those two disciplines — to extrapolate the compositional information derived from the spectrometer to much higher resolutions using the camera, and to extrapolate the geological information that we learn from the camera to the compositional constraints from the spectrometer.
Q: How do those mission goals align with the research that you've been doing here at MIT?
A: One of the other major missions that I've been involved with was the Cassini mission, primarily working with the Visual and Infrared Spectrometer team to understand the geology and composition of Saturn's moon Titan. That instrument is very similar to the MISE instrument, both in function and in science objective, so there's a very strong connection between that and the Europa Clipper mission. Another mission, for which I'm leading the camera team, is working to retrieve a sample of a comet, and my primary function on that mission is understanding the geology of the cometary surface.
Q: What are you most excited about learning from the Europa Clipper mission?
A: I'm most fascinated with some of these very unique geologic features that we see on the surface of Europa, understanding the composition of the material that is involved, and the processes that are driving those features. In particular, the chaos terrains and the fractures that we see on the surface.
Q: It's going to be a while before the spacecraft finally reaches Europa. What work needs to be done in the meantime?
A: A key component of this mission will be the laboratory work here on Earth, expanding our spectral libraries so that when we collect a spectrum of Europa's surface, we can compare it to laboratory measurements. We are also in the process of developing a number of models that will allow us, for example, to understand how a material might be processed and changed as it starts in the ocean and works its way up through fractures and eventually to the surface. Developing these models now is important; once we begin collecting data, we can then make corrections and get improved observations as the mission progresses. Making the best and most efficient use of the spacecraft resources requires an ability to reprogram and refine observations in real time.
Threat of mosquito-borne diseases rises in U.S. with global temperature
Experts fear more cases of West Nile virus, EEE (and possibly Zika, Dengue fever) as warm seasons get longer, wetter
Alvin Powell
Harvard Staff Writer
Crisper fall weather is descending, signaling the coming end of another mosquito season that this year saw modest outbreaks of West Nile virus and eastern equine encephalitis.
The good news has been that the disease-carrying mosquitoes would rather bite birds than humans, a factor in keeping the maladies relatively rare. The bad news is that a warming world is expected to add months to mosquito season and, worse, that species with a stronger taste for humans are headed north.
The Centers for Disease Control and Prevention says this year there have been just 880 U.S. cases of West Nile, the most common mosquito-borne disease in the continental U.S. EEE is rarer still, with just 13 cases in seven states this year.
That rarity is a good thing because both can be deadly.
Though most cases are mild or asymptomatic, about one in 150 cases of West Nile can be severe (as was the recent case of Anthony Fauci, former director of the National Institute of Allergy and Infectious Diseases) and about one in 10 severe cases results in death, according to the CDC; taken together, that is roughly one death per 1,500 infections. The numbers are more sobering for EEE, with most of the cases reported each year being severe and 30 percent, on average, resulting in death. Seven of the known cases this year have been fatal.
The cooler temperatures that come with autumn are beginning to ease the EEE outbreak. In early October, Massachusetts public health officials lowered EEE risk warnings in the worst-hit parts of the state from critical to moderate. The risks for West Nile remained unchanged — high or moderate over large portions of the state — and experts warn that cooler weather alone doesn’t stop transmission. Mosquitoes remain active until killed by frost, which has been happening later in recent years.
In fact, recent studies have projected that by 2050 longer autumns and earlier springs will extend the U.S. mosquito season by as much as two months. Those months are expected to be warmer and wetter, providing more standing water where mosquitoes can breed. The extra time also means more gestational cycles so more biting by females, who must have a blood meal before laying eggs.
Professor Flaminia Catteruccia with mosquito cages at Harvard Chan School.
Photo by Dylan Goodman
“You have more bites, more areas where they’re able to live, more months when they’re active, and more places for them to breed. That means larger populations,” said Matthew Phillips, a research fellow in infectious diseases at Harvard Medical School and Massachusetts General Hospital. “All of this is expected based just on changes in climate that affect mosquitoes.”
Evidence of the trends has already been seen, Phillips said. In 2021, during one of the hottest Decembers on record, the CDC recorded 30 cases of West Nile virus. Even at MGH in chilly Boston, the trend has been evident, albeit with diseases spread by hardier insect vectors.
“We were seeing cases of anaplasmosis and babesiosis, diseases that are spread by ticks and can be potentially pretty serious,” Phillips said. “Typically, you’d see them in summertime, but we were seeing those in the middle of winter.”
Experts are also keeping an eye on two invasive species that have already established themselves in the nation’s Southeast and are beginning to spread north. The mosquitoes, Aedes aegypti and Aedes albopictus, can carry several viral diseases, including West Nile and EEE. But unlike the mosquitoes currently spreading those diseases, both prefer humans. Estimates by Canadian researchers in 2020 showed the species spreading to the West Coast and the Canadian border by 2080.
Epidemiologists note the two species pose some additional threats. Besides West Nile and EEE, these mosquitoes can carry Zika, which caused 3,500 cases of microcephaly among infants during Brazil’s 2015-16 outbreak. They can also spread the tropical diseases Dengue fever — called “break-bone fever” because of the intensity of its pain — and Chikungunya, a tropical fever with no known treatment or cure. Public health officials in Florida and California reported cases of Dengue fever this year.
Aedes aegypti is more efficient at spreading diseases like Zika and Dengue, but when discussing near-term threats, both Phillips and Flaminia Catteruccia, a mosquito expert at the Harvard T.H. Chan School of Public Health, point to albopictus as the most concerning. Not only is it hardier, it’s already starting to appear.
“It’s only recently been seen in Massachusetts and is very good at transmitting viruses,” said Catteruccia, professor of immunology and infectious diseases and a Howard Hughes Medical Institute investigator. “That’s my bit of worry: If it becomes really prevalent, we might see more transmission. But it remains to be established whether the environmental conditions, especially the long winters here, will be hospitable enough for these mosquitoes to survive.”
Alongside the new threats is the possibility of an older one that may make a reappearance: malaria. In 2023, there were malaria outbreaks in Florida, Texas, and Maryland that could not be traced to someone arriving from a malaria-endemic country. The apparently local acquisition of the disease is concerning because malaria was responsible for more than 600,000 deaths in 85 countries in 2022. It’s also not a newcomer to the U.S. Malaria circulated widely here from Colonial times until it was eradicated in 1951.
Catteruccia said that one factor favors malaria’s U.S. spread: the Anopheles mosquitoes that host the malaria parasite are already widespread here. Counterbalancing that is those mosquitoes’ preference for animals over humans. Cold winters also provide a shield.
“Malaria used to be here in the states, so the mosquitoes are around and are potential vectors of malaria,” Catteruccia said. “But malaria has a very complex lifecycle, so especially here in the north, I don’t see this becoming an issue for the time being.”
With shifting disease patterns already happening, Phillips said our understanding of the epidemiology of those diseases has to change as well. Physicians who diagnose patients during winter shouldn’t automatically rule out ailments traditionally seen in summer. And those diagnosing in summer shouldn’t rule out ailments from warmer regions.
“One thing that climate change does is it changes the traditional epidemiology of mosquito-borne diseases,” Phillips said. “We’re used to these diseases in the summertime, and they’re showing up in winter. We’re used to them being in the tropics and they’re showing up in temperate climates. These traditional epidemiological associations are breaking down and, as they break down, we need better disease monitoring to know where they’re going and what they’re doing.”
How whales and dolphins adapted for life on the water
A common dolphin off the coast of Australia.
Credit: Amandine Gillet.
Wendy Heywood
Harvard Correspondent
Backbones of ocean-dwelling mammals evolved differently than those of species living closer to shore, study finds
If you’ve ever seen dolphins swim, you may have wondered why they undulate instead of moving side to side as fish do. Though they have a fishlike body, cetaceans, which include whales, dolphins, and porpoises, are mammals that descended from land-dwelling ancestors.
Cetaceans have undergone profound changes in their skeletal structures to thrive in aquatic environments, including the reduction of hindlimbs and the evolution of flippers and tail flukes, resulting in a streamlined body. Scientists still don’t understand how the transition from land to water, approximately 53 million years ago, impacted cetaceans’ backbone, a central element of their skeleton.
A new study in Nature Communications sheds light on how these marine mammals’ backbones were reorganized as their ancestors adapted to life in water. The international Harvard-led team found that, contrary to previous assumptions, the cetacean backbone is highly regionalized, despite being homogeneous in shape along its length. The way in which the backbone is regionalized, however, is drastically different from that of terrestrial mammals.
The team also explored how regions in the backbone correlate to habitat and swimming speed. They discovered that species living farther from the coast have more vertebrae, more regions, and higher burst swimming speeds. Species living in rivers and bays, closer to shore, have fewer vertebrae and regions, but their regions differ more from one another, potentially affording them greater maneuverability.
“When their ancestor went back into the water, whales and dolphins lost their hind legs and developed a fish-like body,” said lead author Amandine Gillet, Marie Curie Fellow in the Department of Organismic and Evolutionary Biology at Harvard and in the Department of Earth and Environmental Sciences at the University of Manchester. “But that morphological change also means the vertebral column is now the main part of the skeleton driving locomotion in an aquatic environment.”
The vertebral column of terrestrial mammals moving on land must provide support to help the legs carry the body weight. When cetaceans transitioned from land to water, buoyancy relieved the skeleton of the need to support body weight against gravity. The new body structure and movements needed to move through water meant the backbone of these animals would have to shift in some way to fit their new environments.
Previous studies have looked at the backbone from a vertebral morphological view. In a 2018 Science paper, co-authors Stephanie Pierce and Katrina Jones explored the complex evolutionary history of the mammalian backbone using a novel statistical method first developed to study the backbones of snakes. Pierce and Jones revised the model to fit their study, which allowed them to demonstrate that the vertebral column of terrestrial mammals is characterized by numerous, distinct regions in comparison to amphibians and reptiles.
Comparing backbones of species living in shallow waters (left) and the open ocean (right) shows differences in number of vertebrae, regions, and modules.
Credit: Amandine Gillet
“It’s a challenge to understand how the regions of a terrestrial mammal’s backbone can be found in whales and dolphins, and one reason is because their backbone looks very different in terms of morphology, even though they evolved from them,” said Pierce, a professor in the Department of Organismic and Evolutionary Biology at Harvard and senior author of the study. “They lost the sacrum, a fused string of vertebrae bracing the hind legs, and a critical landmark needed to distinguish the tail from the rest of the body.”
The vertebrae of cetaceans are further complicated in that they became more homogeneous in their anatomical features, so the transition from one vertebra to another is gradual compared with the abrupt transitions found in terrestrial mammals, making regions more difficult to identify.
“Not only do they have very similar vertebrae, but certain species, in particular porpoises and dolphins, have many more vertebrae than terrestrial mammals, with some species having close to 100 vertebrae,” said Jones. “This makes it really challenging to translate regions found in terrestrial mammals to the backbones in whales and dolphins.”
Traditional statistical methods used to identify regionalization patterns require the exact same number of elements across specimens. The statistical method Pierce and Jones implemented (called Regions) allowed them to overcome this issue by analyzing the backbone of each specimen individually. While the method worked well for the constrained backbone of terrestrial mammals, it proved computationally challenging for the high counts of vertebrae in cetaceans. Gillet collaborated with the Data Science Services team at the Harvard Institute for Quantitative Social Science to rewrite the code, allowing the program to obtain results within minutes. The researchers made the new program, called MorphoRegions, publicly available for the scientific community as a computational software R package.
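For intuition, here is a toy Python sketch of the regionalization idea: fit separate linear trends to candidate regions of a series of vertebral measurements, and pick the breakpoints and region count that best balance fit against complexity. The real Regions/MorphoRegions method is an R package with a much more careful segmented-regression and model-selection procedure; the cost function, brute-force search, and penalty below are simplified assumptions for illustration.

import itertools
import numpy as np

def segment_cost(y):
    """Cost of one region: squared residuals around a linear trend."""
    x = np.arange(len(y))
    if len(y) < 3:
        return 0.0
    slope, intercept = np.polyfit(x, y, 1)
    return float(np.sum((y - (slope * x + intercept)) ** 2))

def best_regions(y, n_regions):
    """Brute-force search for breakpoints that minimize total cost."""
    n = len(y)
    best = (np.inf, None)
    for cuts in itertools.combinations(range(2, n - 1), n_regions - 1):
        bounds = [0, *cuts, n]
        cost = sum(segment_cost(y[a:b]) for a, b in zip(bounds, bounds[1:]))
        if cost < best[0]:
            best = (cost, cuts)
    return best

# Toy "vertebral column": one measurement (e.g., centrum length) per vertebra.
rng = np.random.default_rng(0)
column = np.concatenate([
    rng.normal(5.0, 0.1, 10),   # thoracic-like region
    rng.normal(7.0, 0.1, 15),   # lumbar-like region
    rng.normal(4.0, 0.1, 12),   # caudal-like region
])

# Compare models with 1-4 regions using a crude BIC-style penalty.
n = len(column)
for k in range(1, 5):
    cost, cuts = best_regions(column, k)
    score = n * np.log(cost / n + 1e-9) + 2 * k * np.log(n)
    print(f"{k} regions: breakpoints={cuts}, score={score:.1f}")

The brute-force search above scales poorly with vertebral count, which hints at why the original implementation struggled with cetaceans' nearly 100 vertebrae and needed to be rewritten for efficiency.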
“This is definitely one of the biggest advances of our study,” said Pierce. “Amandine spent months refining the program so that it could analyze a system of high repeating units without crashing the computer.”
Gillet applied the MorphoRegions method to the data she had previously collected during her Ph.D. work. She visited six museums in Europe, South Africa, and the U.S., gathering information on 139 specimens from 62 cetacean species, two-thirds of the almost 90 living species. In total, Gillet measured 7,500 vertebrae and ran them through the analytical pipeline.
Researchers propose a model where the backbone of cetaceans is divided into precaudal and caudal segments.
Credit: Amandine Gillet
“Our large data set allowed us to demonstrate that not only does the organization of the cetacean backbone differ from terrestrial mammals, but also that the patterns vary within cetaceans, as we identified between six and nine regions depending on the species,” said Gillet. “We then worked from there to find commonalities across regions and identified a pattern common to all cetaceans, which is summarized by our Nested Regions hypothesis.”
The hypothesis proposed by the team introduces a hierarchical organization of the backbone in which a precaudal and a caudal segment are first identified. The two segments are then each divided into several modules common to all cetaceans: cervical, anterior thoracic, thoraco-lumbar, posterior lumbar, caudal, peduncle, and fluke. Next, depending on the species, each module is further subdivided into one to four regions, with a minimum of six and a maximum of nine post-cervical regions along the backbone.
“Surprisingly, this showed us that, compared to terrestrial mammals, the precaudal segment has less regions, whereas the caudal area has more,” said Pierce. “Terrestrial mammals use their tails for a variety of different functions, but not usually for generating propulsive forces, like cetaceans do. Having more regions in the tail may allow for movement in very specific regions of the tail.”
With a better understanding of the organization of the cetacean backbone, the researchers next plan to investigate how these morphological regions correlate with function, using experimental data on the flexibility of the vertebral column collected in the lab. These data, collected on modern taxa, should allow the researchers to infer the swimming abilities of fossil whales and help reveal how the backbone shifted from a weight-bearing structure on land to a propulsion-generating organ in the water.
Foreign policy experts discuss likely fraught succession at kickoff of two months of events marking 75th anniversary of People’s Republic
Xi Jinping has managed to maintain his grip on power in the People’s Republic of China for longer than a decade. What will unfold when the 71-year-old president eventually steps down?
“I think it’s almost certain that he will choose a weak … successor,” said Yuhua Wang, a professor of government and one of three experts to participate in a symposium hosted by the Fairbank Center for Chinese Studies.
“The People’s Republic of China at 75” kicked off two months of lectures, discussions, and film screenings organized by the center to mark the anniversary of revolutionary leader Mao Zedong’s proclamation of a new Communist state. Moderator and Fairbank Center Faculty Director Mark Wu, the Henry L. Stimson Professor of Law, got the event started by inviting panelists to offer reflections bridging past, present, and future. The theme of leadership transition rose to the fore as Xi struggles with a wobbly post-COVID economy.
Wang, author of “The Rise and Fall of Imperial China” (2022), struck a note of optimism by first underscoring the PRC’s resilience by historical standards. Over 2,000 years, the average Chinese dynasty lasted 70 years by his calculations. “If you think about comparative communist regimes, the Soviet Union lasted for 69 years,” Wang added.
But he quickly pivoted to commonalities between Imperial China and the PRC. The quality of governance during the Imperial period depended solely on leadership — never on the health of China’s institutions, Wang emphasized. And the PRC, despite its collectivist ideologies, has failed to break that cycle.
The comparative political scientist went on to cite “the crown prince problem,” a concept that explains why strong emperors usually select heirs who threaten neither their power nor their lives. It played out again and again in Imperial China. And it also happened following Chairman Mao’s death in 1976, Wang argued. “What I worry most in the next 25 years is exactly this succession,” he said.
Joseph Fewsmith (from left), Mark Wu, Anthony Saich, and Yuhua Wang.
Xi, who also serves as general secretary of the party and commander of the armed forces, ascended to the presidency in 2013. In 2018, just ahead of his second five-year term, he mobilized the National People’s Congress to abolish term limits enacted by former leader Deng Xiaoping amid an era of reform in the 1980s. In effect, Xi’s move returned China to the one-ruler cycle that prevailed under Mao and the centuries of emperors before him.
“Xi’s decision to extend his rule pushes succession into a very uncertain and unpredictable future,” declared panelist Anthony Saich of Harvard Kennedy School.
Boston University’s Joseph Fewsmith, a professor of international relations and political science, grappled in his remarks with the contested legacy of Mao himself. Fewsmith highlighted a resolution on China’s history, implemented by Deng in 1981, that was pretty tough on the PRC founder. “It left no doubt that the Cultural Revolution and Great Leap Forward were very serious mistakes,” said Fewsmith, co-author with Nancy Hearst of the 10-volume “Mao’s Road to Power.”
More than 15 years later came proposed revisions that put a more positive spin on the Maoist period. Moves to formally adopt this version of events were blocked in the late ’90s. But Xi formally accepted them in 2013, Fewsmith noted. “We have been living with this interpretation of the Maoist period ever since,” he said.
How did things go in the intervening years, as Xi consolidated power and ensured his own longevity? Fewsmith pointed to slowing economic growth, the devastation of COVID-19, and a rising tide of nationalism over Maoist political thought.
Later in the conversation he challenged characterizations of Xi as a strong leader, citing as just one bit of evidence delays to the recent Third Plenum meeting of top party officials amid urgent economic concerns.
“I think we have a very rough future in China,” Fewsmith concluded. “And I would highlight succession as probably the most critical.”
Next up, on Oct. 25, the JFK Jr. Forum hosts a conversation with Ambassador Kevin Rudd. On Oct. 30, New York Times correspondent Edward Wong will discuss his new book, “At The Edge of Empire,” which blends family history and his own reporting on military efforts to maintain control over China’s border regions. For a complete lineup, visit fairbank.fas.harvard.edu.
Where is generative AI already proving its worth in teaching and what are its limits? Will avatars soon replace lecturers? In this interview, Jan Vermant, Vice Rector for Curriculum Development, talks about trends at ETH and his own experiences.
When an election result is disputed, people who are skeptical about the outcome may be swayed by figures of authority who come down on one side or the other. Those figures can be independent monitors, political figures, or news organizations. However, these “debunking” efforts don’t always have the desired effect, and in some cases, they can lead people to cling more tightly to their original position.
Neuroscientists and political scientists at MIT and the University of California at Berkeley have now created a computational model that analyzes the factors that help to determine whether debunking efforts will persuade people to change their beliefs about the legitimacy of an election. Their findings suggest that while debunking fails much of the time, it can be successful under the right conditions.
For instance, the model showed that successful debunking is more likely if people are less certain of their original beliefs and if they believe the authority is unbiased or strongly motivated by a desire for accuracy. It also helps when an authority comes out in support of a result that goes against a bias they are perceived to hold: for example, Fox News declaring that Joseph R. Biden had won in Arizona in the 2020 U.S. presidential election.
“When people see an act of debunking, they treat it as a human action and understand it the way they understand human actions — that is, as something somebody did for their own reasons,” says Rebecca Saxe, the John W. Jarve Professor of Brain and Cognitive Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study. “We’ve used a very simple, general model of how people understand other people’s actions, and found that that’s all you need to describe this complex phenomenon.”
The findings could have implications as the United States prepares for the presidential election taking place on Nov. 5, as they help to reveal the conditions that would be most likely to result in people accepting the election outcome.
MIT graduate student Setayesh Radkani is the lead author of the paper, which appears today in a special election-themed issue of the journal PNAS Nexus. Marika Landau-Wells PhD ’18, a former MIT postdoc who is now an assistant professor of political science at the University of California at Berkeley, is also an author of the study.
Modeling motivation
In their work on election debunking, the MIT team took a novel approach, building on Saxe’s extensive work studying “theory of mind” — how people think about the thoughts and motivations of other people.
As part of her PhD thesis, Radkani has been developing a computational model of the cognitive processes that occur when people see others being punished by an authority. Not everyone interprets punitive actions the same way, depending on their previous beliefs about the action and the authority. Some may see the authority as acting legitimately to punish an act that was wrong, while others may see an authority overreaching to issue an unjust punishment.
Last year, after participating in an MIT workshop on the topic of polarization in societies, Saxe and Radkani had the idea to apply the model to how people react to an authority attempting to sway their political beliefs. They enlisted Landau-Wells, who received her PhD in political science before working as a postdoc in Saxe’s lab, to join their effort, and Landau-Wells suggested applying the model to debunking of beliefs regarding the legitimacy of an election result.
The computational model created by Radkani is based on Bayesian inference, which allows the model to continually update its predictions of people’s beliefs as they receive new information. This approach treats debunking as an action that a person undertakes for his or her own reasons. People who observe the authority’s statement then make their own interpretation of why the person said what they did. Based on that interpretation, people may or may not change their own beliefs about the election result.
Additionally, the model does not assume that any beliefs are necessarily incorrect or that any group of people is acting irrationally.
“The only assumption that we made is that there are two groups in the society that differ in their perspectives about a topic: One of them thinks that the election was stolen and the other group doesn’t,” Radkani says. “Other than that, these groups are similar. They share their beliefs about the authority — what the different motives of the authority are and how motivated the authority is by each of those motives.”
The researchers modeled more than 200 different scenarios in which an authority attempts to debunk a belief held by one group regarding the validity of an election outcome.
Each time they ran the model, the researchers altered the certainty levels of each group’s original beliefs, and they also varied the groups’ perceptions of the motivations of the authority. In some cases, groups believed the authority was motivated by promoting accuracy, and in others they did not. The researchers also altered the groups’ perceptions of whether the authority was biased toward a particular viewpoint, and how strongly the groups believed in those perceptions.
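The gist of that Bayesian updating can be shown with a deliberately stripped-down Python sketch. Unlike the authors' model, which treats debunking as a motivated action and reasons jointly about the authority's goals, this version hard-codes each group's perceived likelihood that the authority would call a stolen election legitimate; all numbers are invented for illustration.

def update(prior_legit, p_say_legit_if_legit, p_say_legit_if_stolen):
    """Posterior P(election legitimate) after hearing 'it was legitimate'."""
    num = prior_legit * p_say_legit_if_legit
    den = num + (1 - prior_legit) * p_say_legit_if_stolen
    return num / den

# Group A doubts the result but sees the authority as accuracy-driven:
# the authority would rarely call a stolen election legitimate.
belief = 0.40
for step in range(5):  # five successive statements, as in the scenarios
    belief = update(belief, p_say_legit_if_legit=0.9, p_say_legit_if_stolen=0.2)
print(f"uncertain group, trusted authority: {belief:.2f}")

# Group B is certain the election was stolen and sees the authority as
# biased toward declaring legitimacy regardless of the truth, so the same
# statement carries almost no evidence and beliefs barely move.
belief = 0.02
for step in range(5):
    belief = update(belief, p_say_legit_if_legit=0.9, p_say_legit_if_stolen=0.85)
print(f"certain group, distrusted authority: {belief:.2f}")

Running this, the uncertain group converges toward accepting the result while the certain group stays essentially fixed, mirroring the paper's finding that prior certainty and perceived motives govern whether debunking works.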
Building consensus
In each scenario, the researchers used the model to predict how each group would respond to a series of five statements made by an authority trying to convince them that the election had been legitimate. The researchers found that in most of the scenarios they looked at, beliefs remained polarized and in some cases became even further polarized. This polarization could also extend to new topics unrelated to the original context of the election, the researchers found.
However, under some circumstances, the debunking was successful, and beliefs converged on an accepted outcome. This was more likely to happen when people were initially more uncertain about their original beliefs.
“When people are very, very certain, they become hard to move. So, in essence, a lot of this authority debunking doesn’t matter,” Landau-Wells says. “However, there are a lot of people who are in this uncertain band. They have doubts, but they don’t have firm beliefs. One of the lessons from this paper is that we’re in a space where the model says you can affect people’s beliefs and move them towards true things.”
Another factor that can lead to belief convergence is if people believe that the authority is unbiased and highly motivated by accuracy. Even more persuasive is when an authority makes a claim that goes against their perceived bias — for instance, Republican governors stating that elections in their states had been fair even though the Democratic candidate won.
As the 2024 presidential election approaches, grassroots efforts have been made to train nonpartisan election observers who can vouch for whether an election was legitimate. These types of organizations may be well-positioned to help sway people who might have doubts about the election’s legitimacy, the researchers say.
“They’re trying to train people to be independent, unbiased, and committed to the truth of the outcome more than anything else. Those are the types of entities that you want. We want them to succeed in being seen as independent. We want them to succeed as being seen as truthful, because in this space of uncertainty, those are the voices that can move people toward an accurate outcome,” Landau-Wells says.
The research was funded, in part, by the Patrick J. McGovern Foundation and the Guggenheim Foundation.
EPFL and ETH Zurich Presidents Martin Vetterli and Joël Mesot consider high tuition fees, as charged in the English-speaking world, to be the wrong approach to improving the financial situation of the two universities. Students should be seen as success factors for our country, not as cash cows.
The significant funding commitment will enable world-class discovery science, unlocking new insights into how cancers develop, grow and spread, as well as examining how the immune system can be harnessed to combat the disease.
Research at the CRUK Cambridge Institute focuses on understanding every stage of the cancer life cycle – how tumours grow and spread and how this is impacted by the characteristics of each individual patient. By studying how tumours develop, adapt, and interact with their surroundings, scientists aim to uncover crucial insights into their behaviour.
Vice-Chancellor of the University of Cambridge, Professor Deborah Prentice, said: “From understanding and detecting cancer at its very earliest stages, to developing kinder treatments to building Cambridge Cancer Research Hospital, a transformative new cancer research hospital for the region, Cambridge is changing the story of cancer. For many years now, Cancer Research UK has played a vital role in enabling this world-leading work. Today’s announcement will ensure our researchers continue to find new ways to transform the lives of patients locally, nationally and internationally.”
Today’s £173 million announcement further boosts CRUK’s unwavering commitment towards its mission to beat cancer. The charity is investing in exciting new research programmes, forging new partnerships and is on track to invest more than £1.5bn on research over the five-year period 2021/22 to 2025/26.
Director of the CRUK Cambridge Institute, Professor Greg Hannon, said: “In a golden era for life sciences, this funding bolsters Cambridge as a major global hub for cancer research on an increasingly competitive worldwide stage and will greatly aid the recruitment of top-tier international talent.
“Research from the Institute has already made a positive impact for patients and their families, from the development of innovative technologies, diagnostic tests, and advanced imaging methods to the roll out of personalised medicine programmes for those with brain, breast, pancreatic, and ovarian cancers. We believe that only by embracing the complexity of cancer and how the disease interacts with the normal cells of patients can we move the needle on the hardest to treat cancers.”
The Institute is dedicated to improving cancer patients’ lives through discovery science and clinical translational research and has over 300 scientists working on groundbreaking discoveries taking research from laboratory bench to bedside.
Established in 2007, it was the first major new cancer research centre in the UK for over 50 years. In 2013, it became a department of the University of Cambridge School of Clinical Medicine, strengthening links with researchers across the University and at Addenbrooke's Hospital, and further enhancing its position as a world leader with research transitioning into clinical trials, and ultimately new and better cancer treatments.
Professor Hannon added: "The Institute serves as a foundation for the entire Cambridge cancer research community through access to cutting-edge equipment and technical expertise. Only through understanding all aspects of the disease can we prevent, detect and treat cancer so that everybody can lead longer, better lives, free from fear of cancer.
“With this new funding, the Institute aims to accelerate its impact for patients, with new schemes to integrate clinicians into every aspect of our research and to embrace new technologies, including the promise of machine learning and artificial intelligence to enhance our discovery portfolio.”
The award, which will support the Institute over the next seven years, follows a comprehensive review of the facility led by an independent panel of international cancer experts, who recognised the Institute’s research innovation.
CRUK Chief Executive, Michelle Mitchell, said: “We are delighted to confirm this incredible investment which is a reflection of the world-leading research community at the CRUK Cambridge Institute. The funding will underpin long-term cutting-edge discovery research, as well as supporting researchers to find new ways to improve cancer prevention and treatment, while creating innovative solutions to diagnose the disease earlier.
“This kind of funding would not be possible without the generosity of Cancer Research UK supporters and philanthropists."
Work undertaken at the Institute includes:
Understanding cancer: By gaining a deeper understanding of how tumours grow, adapt, and interact with their surroundings, scientists hope to uncover why some cells become cancerous and learn how each tumour's lifecycle can affect a patient’s response to treatment and prognosis. Professor Greg Hannon's team developed a diagnostic tool using virtual reality to explore every cell and aspect of breast tumours in unprecedented detail.
Unravelling tumour interactions: Researchers are investigating a tumour’s ‘microenvironment’ – which includes the surrounding cells, blood vessels, and immune cells – and how they interact. This is helping scientists to predict how well immunotherapy treatments will work.
Cancer detection: Scientists are finding new ways to detect cancer earlier, predict the best course of treatments and tailor therapies to individual needs, to improve survival. Using tumour DNA, scientists can monitor the effectiveness of treatments and catch signs of cancer returning. Cambridge scientists are also working on a simple at-home test for future patients to regularly monitor their progress.
Personalised medicine: Looking at the unique genetic mutations of a person’s tumour, including how it behaves and responds to treatment, allows treatments to be developed and matched to the specific genetic change. For example, Professor James Brenton's team discovered a specific mutation in the most common form of ovarian cancer which is now used across the NHS as a cancer marker to measure treatment response for the disease.
Thanks to research, cancer death rates have fallen by 10 percent in the UK over the past decade. But in the East of England, around 37,400 people are still diagnosed with the disease and around 15,700 lose their lives to it every year, underlining the vital need for new and better treatments.
Major studies seeking more accurate treatments for the deadliest cancers like ovarian and oesophageal cancer will also be supported at the Institute. Research undertaken by Professor Florian Markowetz and his team includes predicting cancer weaknesses to treatment, and spotting cancers as early as possible using AI technology.
There are 17 research groups at the Institute – which is located on the largest biomedical campus in Europe – studying a range of cancers and technologies to support improved cancer treatments.
Adapted from a press release from Cancer Research UK
Cancer Research UK (CRUK) has today announced a £173 million investment in its institute at the University of Cambridge - the largest single grant ever awarded by the charity outside of London.
Active electronics — components that can control electrical signals — usually contain semiconductor devices that receive, store, and process information. These components, which must be made in a clean room, require advanced fabrication technology that is not widely available outside a few specialized manufacturing centers.
During the Covid-19 pandemic, the lack of widespread semiconductor fabrication facilities was one cause of a worldwide electronics shortage, which drove up costs for consumers and had implications in everything from economic growth to national defense. The ability to 3D print an entire, active electronic device without the need for semiconductors could bring electronics fabrication to businesses, labs, and homes across the globe.
While this idea is still far off, MIT researchers have taken an important step in that direction by demonstrating fully 3D-printed resettable fuses, which are key components of active electronics that usually require semiconductors.
The researchers’ semiconductor-free devices, which they produced using standard 3D printing hardware and an inexpensive, biodegradable material, can perform the same switching functions as the semiconductor-based transistors used for processing operations in active electronics.
Although still far from achieving the performance of semiconductor transistors, the 3D-printed devices could be used for basic control operations like regulating the speed of an electric motor.
“This technology has real legs. While we cannot compete with silicon as a semiconductor, our idea is not to necessarily replace what is existing, but to push 3D printing technology into uncharted territory. In a nutshell, this is really about democratizing technology. This could allow anyone to create smart hardware far from traditional manufacturing centers,” says Luis Fernando Velásquez-García, a principal research scientist in MIT’s Microsystems Technology Laboratories (MTL) and senior author of a paper describing the devices, which appears in Virtual and Physical Prototyping.
He is joined on the paper by lead author Jorge Cañada, an electrical engineering and computer science graduate student.
An unexpected project
Semiconductors, including silicon, are materials with electrical properties that can be tailored by adding certain impurities. A silicon device can have conductive and insulating regions, depending on how it is engineered. These properties make silicon ideal for producing transistors, which are a basic building block of modern electronics.
However, the researchers didn’t set out to 3D-print semiconductor-free devices that could behave like silicon-based transistors.
This project grew out of another in which they were fabricating magnetic coils using extrusion printing, a process where the printer melts filament and squirts material through a nozzle, fabricating an object layer-by-layer.
They saw an interesting phenomenon in the material they were using, a polymer filament doped with copper nanoparticles.
If they passed a large amount of electric current into the material, it would exhibit a huge spike in resistance but would return to its original level shortly after the current flow stopped.
This property enables engineers to make transistors that can operate as switches, something that is typically only associated with silicon and other semiconductors. Transistors, which switch on and off to process binary data, are used to form logic gates which perform computation.
“We saw that this was something that could help take 3D printing hardware to the next level. It offers a clear way to provide some degree of ‘smart’ to an electronic device,” Velásquez-García says.
The researchers tried to replicate the same phenomenon with other 3D printing filaments, testing polymers doped with carbon, carbon nanotubes, and graphene. In the end, they could not find another printable material that could function as a resettable fuse.
They hypothesize that the copper particles in the material spread out when it is heated by the electric current, which causes a spike in resistance that comes back down when the material cools and the copper particles move closer together. They also think the polymer base of the material changes from crystalline to amorphous when heated, then returns to crystalline when cooled down — a phenomenon known as the polymeric positive temperature coefficient.
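As a rough illustration of that proposed mechanism, here is a lumped-element Python sketch of a resettable fuse: Joule heating drives the trace past a switching temperature, resistance jumps, and cooling restores it. The constants, thresholds, and the hard two-state switch are invented for illustration, not measured values from the paper.

R_COLD, R_HOT = 10.0, 1000.0          # ohms: baseline vs tripped resistance
T_TRIP, T_RESET, T_AMB = 80.0, 40.0, 25.0   # switching / reset / ambient, deg C
C_TH, K_TH = 0.5, 0.05                # assumed thermal mass (J/K) and loss (W/K)

def simulate(v_supply, t_on, t_total, dt=0.1):
    temp, tripped, log = T_AMB, False, []
    for step in range(int(t_total / dt)):
        v = v_supply if step * dt < t_on else 0.0
        resistance = R_HOT if tripped else R_COLD
        power = v ** 2 / resistance   # voltage-driven Joule heating
        temp += dt * (power - K_TH * (temp - T_AMB)) / C_TH
        if temp >= T_TRIP:
            tripped = True            # particles disperse: resistance spikes
        elif temp <= T_RESET:
            tripped = False           # cooled down: resistance resets
        log.append(resistance)
    return log

# Apply 7 V for 20 s, then switch off: resistance spikes, then recovers.
trace = simulate(v_supply=7.0, t_on=20.0, t_total=60.0)
print("peak resistance:", max(trace), "ohms")
print("final resistance:", trace[-1], "ohms")

Note the self-limiting behavior under voltage drive: once the trace trips, the high resistance throttles the heating power, which is what lets the device protect a circuit and then reset on its own.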
“For now, that is our best explanation, but that is not the full answer because that doesn’t explain why it only happened in this combination of materials. We need to do more research, but there is no doubt that this phenomenon is real,” he says.
3D-printing active electronics
The team leveraged the phenomenon to print switches in a single step that could be used to form semiconductor-free logic gates.
The devices are made from thin, 3D-printed traces of the copper-doped polymer. They contain intersecting conductive regions that enable the researchers to regulate the resistance by controlling the voltage fed into the switch.
While the devices did not perform as well as silicon-based transistors, they could be used for simpler control and processing functions, such as turning a motor on and off. Their experiments showed that, even after 4,000 cycles of switching, the devices showed no signs of deterioration.
But there are limits to how small the researchers can make the switches, based on the physics of extrusion printing and the properties of the material. They could print devices a few hundred microns across, but transistors in state-of-the-art electronics are only a few nanometers in diameter.
“The reality is that there are many engineering situations that don’t require the best chips. At the end of the day, all you care about is whether your device can do the task. This technology is able to satisfy a constraint like that,” he says.
However, unlike semiconductor fabrication, their technique uses a biodegradable material and the process uses less energy and produces less waste. The polymer filament could also be doped with other materials, like magnetic microparticles that could enable additional functionalities.
In the future, the researchers want to use this technology to print fully functional electronics. They are striving to fabricate a working magnetic motor using only extrusion 3D printing. They also want to fine-tune the process so they can build more complex circuits and see how far they can push the performance of these devices.
“This paper demonstrates that active electronic devices can be made using extruded polymeric conductive materials. This technology enables electronics to be built into 3D printed structures. An intriguing application is on-demand 3D printing of mechatronics on board spacecraft,” says Roger Howe, the William E. Ayer Professor of Engineering, Emeritus, at Stanford University, who was not involved with this work.
This work is funded, in part, by Empiriko Corporation.
Penn Vet researchers have revealed a connection between NF-κB signaling pathways and X chromosome inactivation, which has implications for understanding sex-based immune responses during infection.
A new technology enables the control of specific brain circuits non-invasively with magnetic fields, according to a preclinical study from researchers at Weill Cornell Medicine, the Rockefeller University and the Icahn School of Medicine at Mount Sinai.
Associate Professor Aleks Kissinger has co-authored the book ‘Picturing Quantum Software’ with John van de Wetering, an Assistant Professor in Theoretical Computer Science at the Informatics Institute, University of Amsterdam and it is currently free for all to download.
The University of Melbourne in partnership with Ashoka University, Universitas Gadjah Mada, Pontificia Universidad Católica de Chile, Mahidol University, University of Manchester, University of Nairobi and the University of Toronto has established the Global Humanities Alliance (GHA), an initiative that aims to raise the profile and social and political impact of the humanities and social sciences globally.
From Tuesday 15 October the University of Cambridge’s Museum of Zoology is offering visitors a unique experience: the chance to chat with the animals on display – whether skeletal, taxidermy, or extinct.
In a collaboration with the company Nature Perspectives, the Museum’s Assistant Director Jack Ashby has chosen a range of animal specimens to bring back to life using generative Artificial Intelligence.
Visitors can pose their questions to thirteen specimens - including dodo and whale skeletons, a taxidermied red panda, and a preserved cockroach - by scanning QR codes that open a chat-box on their mobile phone. In two-way conversations, which can be voice- or text-based, the specimens will answer as if they are still alive.
This is believed to be the first time a museum has used generative Artificial Intelligence to enable visitors to chat with objects on display in this way.
By analysing data from the conversations, the team hopes that the month-long experiment will help them learn more about how AI can help the public to better engage with nature, and about the potential for AI in museums. It will also provide the museum with new insights into what visitors really want to know about the specimens on display.
Nature Perspectives uses AI to enable cultural institutions like the Museum of Zoology to engage the public through these unique conversational experiences. The company aims to reverse a growing apathy towards biodiversity loss by enabling new ways to engage with the natural world.
“This is an amazing opportunity for people to test out an emerging technology in our inspiring Museum setting, and we also hope to learn something about how our visitors see the animals on display,” said Jack Ashby, Assistant Director of the University of Cambridge’s Museum of Zoology.
He added: “Our whole purpose is to get people engaged with the natural world. So we're curious to see whether this will work, and whether chatting to the animals will change people’s attitudes towards them - will the cockroach be better liked, for example, as a result of having its voice heard?”
“By using AI to simulate non-human perspectives, our technology offers a novel way for audiences to connect with the natural world,” said Gal Zanir, co-founder of the company Nature Perspectives, which developed the AI technology for the experience.
He added: “One of the most magical aspects of the simulations is that they’re age-adaptive. For the first time, visitors of all ages will be able to ask the specimens anything they like.”
The technology brings together all available information on each animal involved – including details particular to the individual specimens such as where they came from and how they were prepared for display in the museum. This is all repackaged from a first-person perspective, so that visitors can experience realistic, meaningful conversations.
The animals will adjust their tone and language to suit the age of the person they’re talking to. And they’re multi-lingual - speaking over 20 languages including Spanish and Japanese so that visitors can chat in their native languages.
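One can imagine the persona setup as something like the following hypothetical Python sketch, which assembles a first-person system prompt for a chat model from a specimen record, adapting tone to the visitor's age and answering in their language. The record fields, the build_prompt helper, and the display history are invented; this is not Nature Perspectives' actual implementation.

SPECIMEN = {
    "name": "dodo skeleton",
    "species": "Raphus cucullatus",
    "history": "collected on Mauritius; long displayed in Cambridge",  # invented
    "facts": ["flightless pigeon relative", "extinct by the late 1600s"],
}

def build_prompt(specimen, visitor_age, language="English"):
    tone = "simple, playful words" if visitor_age < 12 else "a conversational adult tone"
    facts = "; ".join(specimen["facts"])
    return (
        f"You are the {specimen['name']} ({specimen['species']}) in the "
        f"Museum of Zoology. Speak in the first person, as if still alive. "
        f"Your history: {specimen['history']}. Known facts: {facts}. "
        f"Answer in {language}, using {tone}, and admit what you don't know."
    )

# A nine-year-old Spanish speaker scans the dodo's QR code:
print(build_prompt(SPECIMEN, visitor_age=9, language="Spanish"))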
The team has chosen a range of specimens that include skeletons, taxidermy, models, and whole preserved animals. The specimens are: dodo skeleton, narwhal skeleton, brain coral, red admiral butterfly, fin whale skeleton, American cockroach, huia taxidermy (a recently extinct bird from New Zealand), red panda taxidermy, freeze-dried platypus, giant sloth fossil skeleton, giant deer skull and antlers, mallard taxidermy, and Ichthyostega model (an extinct early relative of all four-legged animals).
Nature Perspectives was created by a team of graduates from the University of Cambridge’s Masters in Conservation Leadership programme, who noticed that people seem to feel more connected to machines when they can talk to them. This inspired the team to apply the same principle to nature - giving nature a voice to promote its agency and foster deeper, more personal connections between people and the natural world.
“Artificial Intelligence is opening up exciting new opportunities to connect people with non-human life, but the impacts need to be carefully studied. I’m delighted to be involved in exploring how the Nature Perspectives pilot affects the way people feel about and understand the species they ‘meet’ in the Museum of Zoology,” said Professor Chris Sandbrook, Director of the University of Cambridge’s Masters in Conservation Leadership programme.
“Enabling museums to engage visitors with the simulated perspectives of exhibits is only the first step for Nature Perspectives. We aim to apply this transformative approach widely, from public engagement and education to scientific research, to representing nature in legal processes, policy-making and beyond," said Zanir.
The Nature Perspectives AI experiment runs for one month, from 15 October to 15 November 2024. For visiting times see www.museum.zoo.cam.ac.uk/visit-us
Specimens in a Cambridge museum will be brought to life through the power of Artificial Intelligence, by a team aiming to strengthen our connection with the natural world and reverse apathy towards biodiversity loss.
Supported by PURM, second-year Ziana Sundrani and third-year Taiwo Adeaga worked in the Infant Language Center over the summer on a project exploring how infants figure out which things are words.
Unearthed papyrus contains lost scenes from Euripides’ plays
Eileen O’Grady
Harvard Staff Writer
Alums help identify, decipher ‘one of the most significant new finds in Greek literature in this century’
For centuries, questions have loomed about two of Euripides’ lesser-known tragedies, “Ino” and “Polyidus,” with only a smattering of text fragments and plot summaries available to offer glimpses into their narratives.
Now, in a groundbreaking find, two Harvard alumni have identified and worked to decipher 97 lines from these plays on a papyrus from the third century A.D.
Yvona Trnka-Amrhein ’06, Ph.D. ’13, an assistant professor at the University of Colorado, Boulder, was the first to identify part of the text as an excerpt from “Polyidus,” a scene in which King Minos of Crete confronts a seer, demanding he resurrect his son. Trnka-Amrhein and colleague John Gibert, Ph.D. ’91, identified the remaining text as lines from “Ino,” a scene that probably depicts the title character boasting victoriously after orchestrating the deaths of her stepchildren. Their research was published this month in Zeitschrift für Papyrologie und Epigraphik, or the Journal of Papyrology and Epigraphy.
The papyrus as it was uncovered at the ancient necropolis.
Photo courtesy of Yvona Trnka-Amrhein
The papyrus was discovered in 2022 in a burial shaft at the ancient necropolis of Philadelphia, Egypt, by a team from the Egyptian Ministry of Antiquities. In June, Harvard’s Center for Hellenic Studies hosted a conference with Trnka-Amrhein, Gibert, excavation team leader Basem Gehad, and 12 other scholars from around the world to compare research, including Harvard Professor of the Classics Naomi Weiss. Classics Ph.D. candidate Sarah Gonzalez also participated.
“It’s arguably one of the most significant new finds in Greek literature in this century,” Weiss said. “I don’t expect there to be another find like this in my lifetime, in my particular field of expertise. For Harvard’s Center for Hellenic Studies to host the first public investigation into this material was really exciting.”
Weiss, whose research focuses on ancient Greek performance culture, especially classical Greek drama, discussed the significance of the finding. The following transcript has been edited for length and clarity.
Tell me about your experience at the New Euripides Conference.
That was a once-in-a-lifetime experience for a scholar of Greek tragedy. To be one of 15 scholars worldwide who got to see this stuff for the first time was really incredible. There was a core group of people there who were literally going through the fragments word by word and discussing whether Yvona and John’s readings of each part of the papyrus were correct. Some of it is hard to read, so individual words are contested.
What are your early takeaways from the new fragment of “Polyidus”?
It quite clearly seems to be a dialogue between King Minos of Crete and Polyidus the seer, where Polyidus is saying, “It’s wrong to demand that I revive your son from the dead, that goes against all laws of nature,” and Minos is basically saying, “Well, I’m king and what a tyrant asks for has to happen.” The passage seems to be really concerned with questions of tyranny and the extent of human power and free will, and how those can jostle against each other. How far can human power go, how far can human knowledge and skill extend? Even to the point of reviving someone from the dead? The fact that Polyidus does end up reviving the dead son is an example of how Euripides liked to play with plot twists and “happily ever after” endings. At the same time, he was deeply engaged with contemporary intellectual questions.
Naomi Weiss.
Photo by Jodi Hilton
And the new fragment of “Ino”?
If the editors’ reconstruction is correct, then “Ino” is the only surviving fragment that has a dialogue between the two wives: Ino, the king’s first wife, who was long presumed missing and has returned in disguise; and Themisto, the second wife. Each wife has two children. Themisto tries to kill Ino’s children, but Ino tricks her into killing her own instead. Themisto commits suicide, and then the king mistakenly kills one of his sons by Ino, and she walks into the sea with the other. In this excerpt, the meeting of the two wives brings to the fore the doubleness and repetition running throughout the play, which in turn makes us better appreciate quite how excessive this tragedy was, with multiple wives, multiple children, multiple deaths, multiple suicides.
Does this change anything about our understanding of Euripides?
“Ino” is a really gruesome tragedy. The only person left at the end is the king, and he’s lost all his wives and all his children. This seems like tragedy on steroids. It’s the sort of experimentation with how far you can push a tragic plot that may remind us of later plays of Euripides. “Ino” may well be a significantly earlier play — there seems to be a reference to it in a comedy by Aristophanes that was produced in 425 B.C. If that is a reference to “Ino,” we know the tragedy was performed before this date. We tend to think of Euripides’ super experimental plays as being from the last decade of his career, where he’s just going all out and questioning the very form of tragedy. If we are right in dating this earlier, then that changes our understanding of how tragedy developed through the fifth century.
Who might have written the excerpts on this papyrus, and why?
We don’t know. It’s a really open question. At the conference, one of the questions that kept coming up was, is it significant that these two plays, which have something to do with the death of children, were found in a pit grave where there were buried — at different times — the body of an older woman and the body of a child? But it’s very hard to make any reliable conjectures about that connection. Some people at the conference thought that these extracts may be part of what’s called the “anthology tradition”: Maybe someone was teaching Euripides’ plays or hoping to draw from them in their own compositions and compiled a set of useful passages from each tragedy. Another scholar at the conference thought that maybe these were written out to be part of a performance, essentially like a script for actors. All of these questions remain and will be debated.
How much do we know about Euripides’ work as a whole?
When we think of Greek tragedy, we tend to think of the “big three”: Aeschylus, Sophocles, and Euripides, who all wrote tragedies in the fifth century B.C. Of these three tragedians, we have much more surviving of Euripides. While we have seven full plays by Aeschylus and Sophocles, for Euripides, we have 19 full plays and 18 of those can reliably be said to be his. Then we have a lot of fragments. The fragments of plays are preserved across different media — a lot of them are quotations that come up in other authors, but we also have papyri. This is the latest find of tragedy on papyri. These fragments are unusual because they’re relatively long and give us a lot of information about plays that we previously knew less about.
Volatiles are elements or compounds that change into vapour at relatively low temperatures. They include the six most common elements found in living organisms, as well as water. The zinc found in meteorites has a unique composition, which can be used to identify the sources of Earth’s volatiles.
The researchers, from the University of Cambridge and Imperial College London, have previously found that Earth’s zinc came from different parts of our Solar System: about half came from beyond Jupiter and half originated closer to Earth.
“One of the most fundamental questions on the origin of life is where the materials we need for life to evolve came from,” said Dr Rayssa Martins from Cambridge’s Department of Earth Sciences. “If we can understand how these materials came to be on Earth, it might give us clues to how life originated here, and how it might emerge elsewhere.”
Planetesimals are the main building blocks of rocky planets such as Earth. These small bodies form through a process called accretion, in which particles around a young star start to stick together and form progressively larger bodies.
But not all planetesimals are made equal. The earliest planetesimals that formed in the Solar System were exposed to high levels of radioactivity, which caused them to melt and lose their volatiles. But some planetesimals formed after these sources of radioactivity were mostly extinct, which meant they escaped melting and preserved more of their volatiles.
In a study published in the journal Science Advances, Martins and her colleagues looked at the different forms of zinc that arrived on Earth from these planetesimals. The researchers measured the zinc from a large sample of meteorites originating from different planetesimals and used this data to model how Earth got its zinc, by tracing the entire period of the Earth’s accretion, which took tens of millions of years.
Their results show that while these ‘melted’ planetesimals contributed about 70% of Earth’s overall mass, they only provided around 10% of its zinc.
According to the model, the rest of Earth’s zinc came from materials that didn’t melt and lose their volatile elements. Their findings suggest that unmelted, or ‘primitive’ materials were an essential source of volatiles for Earth.
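The headline numbers follow from a simple mass balance: because the melted bodies lost most of their zinc, even a 70% mass contribution supplies only a small share of the total. As a rough illustration of that logic — not the authors’ actual model, which traces isotopic compositions through the full accretion history — here is a toy two-component calculation in Python. The zinc concentrations below are hypothetical values chosen to reproduce the paper’s proportions, not measured ones:

```python
# Toy two-component mass-balance sketch (illustrative only; the published
# model in Martins et al. 2024 is far more detailed).

mass_frac_melted = 0.70      # fraction of Earth's mass from 'melted' planetesimals
mass_frac_primitive = 0.30   # fraction from unmelted, 'primitive' material

# Hypothetical zinc concentrations (arbitrary units): melted bodies lost
# most of their volatiles, so their zinc content is much lower.
zn_conc_melted = 1.0
zn_conc_primitive = 21.0

zn_from_melted = mass_frac_melted * zn_conc_melted
zn_from_primitive = mass_frac_primitive * zn_conc_primitive
total_zn = zn_from_melted + zn_from_primitive

print(f"Zinc from melted bodies:    {zn_from_melted / total_zn:.0%}")
print(f"Zinc from primitive bodies: {zn_from_primitive / total_zn:.0%}")
# -> roughly 10% vs 90%, despite melted bodies supplying 70% of the mass
```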
“We know that the distance between a planet and its star is a determining factor in establishing the necessary conditions for that planet to sustain liquid water on its surface,” said Martins, the study’s lead author. “But our results show there’s no guarantee that planets incorporate the right materials to have enough water and other volatiles in the first place – regardless of their physical state.”
The ability to trace elements through millions or even billions of years of evolution could be a vital tool in the search for life elsewhere, such as on Mars, or on planets outside our Solar System.
“Similar conditions and processes are also likely in other young planetary systems,” said Martins. “The roles these different materials play in supplying volatiles is something we should keep in mind when looking for habitable planets elsewhere.”
The research was supported in part by Imperial College London, the European Research Council, and UK Research and Innovation (UKRI).
Reference:
Rayssa Martins et al. ‘Primitive asteroids as a major source of terrestrial volatiles.’ Science Advances (2024). DOI: 10.1126/sciadv.ado4121
Researchers have used the chemical fingerprints of zinc contained in meteorites to determine the origin of volatile elements on Earth. The results suggest that without ‘unmelted’ asteroids, there may not have been enough of these compounds on Earth for life to emerge.
A study led by researchers at Weill Cornell Medicine and the New York Genome Center found that antiviral enzymes that mutate the DNA of normal and cancer cells are key promoters of early bladder cancer development.
The Cambridge Conservation Initiative and the University of Cambridge Institute for Sustainability Leadership (CISL) co-hosted a panel discussion featuring key industry leaders in the run-up to the 16th Conference of the Parties to the Convention on Biological Diversity (CBD COP16).
Glitter is a major pollutant, with the microplastics commonly found in sewage sludge and wastewater, but now researchers have identified a type of sustainable glitter that has no impact on the environment.
A classical way to image nanoscale structures in cells is with high-powered, expensive super-resolution microscopes. As an alternative, MIT researchers have developed a way to expand tissue before imaging it — a technique that allows them to achieve nanoscale resolution with a conventional light microscope.
In the newest version of this technique, the researchers have made it possible to expand tissue 20-fold in a single step. This simple, inexpensive method could pave the way for nearly any biology lab to perform nanoscale imaging.
“This democratizes imaging,” says Laura Kiessling, the Novartis Professor of Chemistry at MIT and a member of the Broad Institute of MIT and Harvard and MIT’s Koch Institute for Integrative Cancer Research. “Without this method, if you want to see things with a high resolution, you have to use very expensive microscopes. What this new technique allows you to do is see things that you couldn’t normally see with standard microscopes. It drives down the cost of imaging because you can see nanoscale things without the need for a specialized facility.”
At the resolution achieved by this technique, which is around 20 nanometers, scientists can see organelles inside cells, as well as clusters of proteins.
“Twenty-fold expansion gets you into the realm that biological molecules operate in. The building blocks of life are nanoscale things: biomolecules, genes, and gene products,” says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT; a professor of biological engineering, media arts and sciences, and brain and cognitive sciences; a Howard Hughes Medical Institute investigator; and a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research.
Boyden and Kiessling are the senior authors of the new study, which appears today in Nature Methods. MIT graduate student Shiwei Wang and Tay Won Shin PhD ’23 are the lead authors of the paper.
A single expansion
Boyden’s lab invented expansion microscopy in 2015. The technique requires embedding tissue into an absorbent polymer and breaking apart the proteins that normally hold tissue together. When water is added, the gel swells and pulls biomolecules apart from each other.
The original version of this technique, which expanded tissue about fourfold, allowed researchers to obtain images with a resolution of around 70 nanometers. In 2017, Boyden’s lab modified the process to include a second expansion step, achieving an overall 20-fold expansion. This enables even higher resolution, but the process is more complicated.
“We’ve developed several 20-fold expansion technologies in the past, but they require multiple expansion steps,” Boyden says. “If you could do that amount of expansion in a single step, that could simplify things quite a bit.”
With 20-fold expansion, researchers can get down to a resolution of about 20 nanometers using a conventional light microscope. This allows them to see cell structures like microtubules and mitochondria, as well as clusters of proteins.
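The arithmetic behind these resolution figures is straightforward: physically expanding the sample divides the microscope’s diffraction-limited resolution by the expansion factor. A minimal sketch, assuming a typical ~300 nm diffraction limit for a conventional light microscope (an assumed round number, not a figure from the paper):

```python
# Effective resolution after expansion = optical resolution / expansion factor.
# The ~300 nm diffraction limit is an assumed typical value for a
# conventional light microscope, not a number from the paper.

def effective_resolution_nm(optical_limit_nm: float, expansion_factor: float) -> float:
    """Approximate feature size resolvable after physically expanding the sample."""
    return optical_limit_nm / expansion_factor

for factor in (4, 10, 20):
    print(f"{factor:>2}x expansion -> ~{effective_resolution_nm(300, factor):.0f} nm")
# 4x  -> ~75 nm (close to the ~70 nm quoted for the original method)
# 20x -> ~15 nm (the same order as the ~20 nm quoted here)
```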
In the new study, the researchers set out to perform 20-fold expansion with only a single step. This meant that they had to find a gel that was both extremely absorbent and mechanically stable, so that it wouldn’t fall apart when expanded 20-fold.
To achieve that, they used a gel assembled from N,N-dimethylacrylamide (DMAA) and sodium acrylate. Unlike previous expansion gels, which rely on adding another molecule to form crosslinks between the polymer strands, this gel forms crosslinks spontaneously and exhibits strong mechanical properties. Such gel components had previously been used in expansion microscopy protocols, but the resulting gels could expand only about tenfold. The MIT team optimized the gel and the polymerization process to make the gel more robust and to allow for 20-fold expansion.
To further stabilize the gel and enhance its reproducibility, the researchers removed oxygen from the polymer solution prior to gelation, which prevents side reactions that interfere with crosslinking. This step requires running nitrogen gas through the polymer solution, which replaces most of the oxygen in the system.
Once the gel is formed, select bonds in the proteins that hold the tissue together are broken and water is added to make the gel expand. After the expansion is performed, target proteins in tissue can be labeled and imaged.
“This approach may require more sample preparation compared to other super-resolution techniques, but it’s much simpler when it comes to the actual imaging process, especially for 3D imaging,” Shin says. “We document the step-by-step protocol in the manuscript so that readers can go through it easily.”
Imaging tiny structures
Using this technique, the researchers were able to image many tiny structures within brain cells, including structures called synaptic nanocolumns. These are clusters of proteins that are arranged in a specific way at neuronal synapses, allowing neurons to communicate with each other via secretion of neurotransmitters such as dopamine.
In studies of cancer cells, the researchers also imaged microtubules — hollow tubes that help give cells their structure and play important roles in cell division. They were also able to see mitochondria (organelles that generate energy) and even the organization of individual nuclear pore complexes (clusters of proteins that control access to the cell nucleus).
Wang is now using this technique to image carbohydrates known as glycans, which are found on cell surfaces and help control cells’ interactions with their environment. This method could also be used to image tumor cells, allowing scientists to glimpse how proteins are organized within those cells, much more easily than has previously been possible.
The researchers envision that any biology lab should be able to use this technique at low cost, since it relies on standard, off-the-shelf chemicals and common equipment such as confocal microscopes and glove bags, which most labs already have or can easily access.
“Our hope is that with this new technology, any conventional biology lab can use this protocol with their existing microscopes, allowing them to approach resolution that can only be achieved with very specialized and costly state-of-the-art microscopes,” Wang says.
The research was funded, in part, by the U.S. National Institutes of Health, an MIT Presidential Graduate Fellowship, U.S. National Science Foundation Graduate Research Fellowship grants, Open Philanthropy, Good Ventures, the Howard Hughes Medical Institute, Lisa Yang, Ashar Aziz, and the European Research Council.
Thanks to a new technique that allows them to expand tissue 20-fold before imaging it, MIT researchers used a conventional light microscope to generate high-resolution images of synapses (left) and microtubules (right). In the image at left, presynaptic proteins are labeled in red, and postsynaptic proteins are labeled in blue. Each blue-red “sandwich” represents a synapse.
This galaxy is a hundredth the size of the Milky Way, but is surprisingly mature for so early in the universe. Like a large city, it has a dense collection of stars at its core that becomes less dense in the galactic ‘suburbs’. And like a large city, it is starting to sprawl, with star formation accelerating in the outskirts.
This is the earliest-ever detection of inside-out galactic growth. Until Webb, it had not been possible to study galaxy growth so early in the universe’s history. Although the images obtained with Webb represent a snapshot in time, the researchers, led by the University of Cambridge, say that studying similar galaxies could help us understand how they transform from clouds of gas into the complex structures we observe today. The results are reported in the journal Nature Astronomy.
“The question of how galaxies evolve over cosmic time is an important one in astrophysics,” said co-lead author Dr Sandro Tacchella from Cambridge’s Cavendish Laboratory. “We’ve had lots of excellent data for the last ten million years and for galaxies in our corner of the universe, but now with Webb, we can get observational data from billions of years back in time, probing the first billion years of cosmic history, which opens up all kinds of new questions.”
The galaxies we observe today grow via two main mechanisms: either they pull in, or accrete, gas to form new stars, or they grow by merging with smaller galaxies. Whether different mechanisms were at work in the early universe is an open question which astronomers are hoping to address with Webb.
“You expect galaxies to start small as gas clouds collapse under their own gravity, forming very dense cores of stars and possibly black holes,” said Tacchella. “As the galaxy grows and star formation increases, it’s sort of like a spinning figure skater: as the skater pulls in their arms, they gather momentum, and they spin faster and faster. Galaxies are somewhat similar, with gas accreting later from larger and larger distances spinning the galaxy up, which is why they often form spiral or disc shapes.”
This galaxy, observed as part of the JADES (JWST Advanced Extragalactic Survey) collaboration, is actively forming stars in the early universe. It has a highly dense core, which despite its relatively young age, is of a similar density to present-day massive elliptical galaxies, which have 1000 times more stars. Most of the star formation is happening further away from the core, with a star-forming ‘clump’ even further out.
The star formation activity is strongly rising toward the outskirts, as the star formation spreads out and the galaxy grows. This type of growth had been predicted with theoretical models, but with Webb, it is now possible to observe it.
“One of the many reasons that Webb is so transformational to us as astronomers is that we’re now able to observe what had previously been predicted through modelling,” said co-author William Baker, a PhD student at the Cavendish. “It’s like being able to check your homework.”
Using Webb, the researchers extracted information from the light emitted by the galaxy at different wavelengths, which they used to estimate the balance of younger versus older stars, and from that the galaxy’s stellar mass and star formation rate.
Because the galaxy is so compact, its individual images were ‘forward modelled’ to take instrumental effects into account. Using stellar population modelling that includes prescriptions for gas emission and dust absorption, the researchers found older stars in the core, while the surrounding disc component is undergoing very active star formation. This galaxy doubles its stellar mass in the outskirts roughly every 10 million years, which is very rapid: the Milky Way doubles its mass only every 10 billion years.
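To see how dramatic that difference is, compound the two doubling times over the same interval. A minimal sketch using only the two quoted doubling times (the 100-million-year window is an illustrative choice):

```python
# Exponential growth: mass doubles every `doubling_time_myr` million years.
def mass_after(initial_mass: float, doubling_time_myr: float, elapsed_myr: float) -> float:
    return initial_mass * 2 ** (elapsed_myr / doubling_time_myr)

elapsed = 100.0  # million years (illustrative window)
for label, t_double in [("early galaxy outskirts", 10.0), ("Milky Way", 10_000.0)]:
    growth = mass_after(1.0, t_double, elapsed)
    print(f"{label}: x{growth:,.2f} after {elapsed:.0f} Myr")
# early galaxy outskirts: x1,024.00 after 100 Myr (ten doublings)
# Milky Way:              x1.01 after 100 Myr (a hundredth of one doubling)
```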
The density of the galactic core, as well as the high star formation rate, suggest that this young galaxy is rich with the gas it needs to form new stars, which may reflect different conditions in the early universe.
“Of course, this is only one galaxy, so we need to know what other galaxies at the time were doing,” said Tacchella. “Were all galaxies like this one? We’re now analysing similar data from other galaxies. By looking at different galaxies across cosmic time, we may be able to reconstruct the growth cycle and demonstrate how galaxies grow to their eventual size today.”
Astronomers have used the NASA/ESA James Webb Space Telescope (JWST) to observe the ‘inside-out’ growth of a galaxy in the early universe, only 700 million years after the Big Bang.
Mr Lim Zhi Rong, Director (HR Operations) at the NUS Office of Human Resources, has been accorded the Master Professional (IHRP-MP) certification by the Institute for Human Resource Professionals (IHRP) – a recognition that is awarded to top HR leaders in multinational corporations, the public sector, and small and medium-sized enterprises for their significant contributions to the HR industry.
In his 18-year career in HR, Mr Lim held global, regional and country leadership roles across various sectors including Financial, Fast Moving Consumer Goods and Technology, before joining NUS in February 2024. Serving on the IHRP-Professional Practices Committee since 2018, he leads the development of the IHRP Certification Framework, which includes setting standards and developing frameworks for course accreditation and continuing professional development.
Mr Lim received the IHRP-MP certification from Mr Zaqy Mohamad, Senior Minister of State for Defence and Manpower at the annual IHRP event, People Behind People, on 10 October 2024. Nominated by the tripartite partners—Ministry of Manpower, NTUC, and the Singapore National Employers Federation—he is among only 46 IHRP MPs in Singapore.
Reflecting on his achievement, Mr Lim said, “I’m deeply passionate about creating a positive and inclusive workplace culture that fosters employee growth and success. Receiving the IHRP-MP certification is a great honour, and I’m encouraged to continue to contribute meaningfully to the HR community and maintain the highest standards of HR practices at NUS.”
For those with a conventional leg prosthesis, climbing stairs and negotiating uneven terrain are almost insurmountable obstacles. But drawing on ETH expertise, Team Ottobock.X3 has now designed an intelligent prosthesis that helps its wearer move about more freely.
Researchers at ETH Zurich have developed a laser that produces the strongest ultra-short laser pulses to date. In the future, such high power pulses could be used for precision measurements or materials processing.
Ideas, research findings, and real-world results came together in a flurry of intellectual exchanges at the Festival of Ideas 2024 from 16 to 20 September, an event organised by the Lee Kuan Yew School of Public Policy (LKYSPP) to promote dialogue that inspires and shapes solutions to pressing policy issues.
The third edition of the biennial flagship event, which coincided with the School’s 20th anniversary, explored the theme “Navigating a World in Crisis: Transforming Governance through Asia.” The 130 speakers ranged from LKYSPP academics who research and teach public policy to policymakers and industry practitioners who are directly affected by policy decisions in their daily work.
“The Festival is an engagement on the great ideas and the most pressing issues of our time,” said Professor Danny Quah, Dean of LKYSPP, adding that it is an occasion for the School to showcase its thinking, for its PhD and Master’s students to display their work, and for colleagues from LKYSPP and around the world to engage in conversation. “This is all part of the school mission – that we inspire changemakers and leadership, we help improve the well-being of people around us and further afield, and we help transform Asia through the experience of good governance.”
Prof Quah was speaking at the opening session of the Festival, which explored China’s role in a shifting global order and the US-China rivalry through a dialogue with distinguished guests Ambassador Chan Heng Chee, Ambassador-at-Large and Professor at the Lee Kuan Yew Centre for Innovative Cities; Mr Lee Yi Shyan, Former Senior Minister of State and Chairman of Business China; and Assistant Professor Selina Ho, Co-Director of the Centre on Asia and Globalisation at LKYSPP. Their discussion touched on the role of smaller nations and the Association of Southeast Asian Nations (ASEAN) in helping to mediate the rivalry, as well as China’s domestic drivers and its ideal world order that inform the superpower’s perspective.
The session set the stage for the rest of the programme, which comprised 42 sessions over five days with various session formats such as lectures, workshops, panel discussions, and book launches. Some of the sessions ran concurrently, and most were accompanied by moderated Q&A segments that yielded valuable insights for the audience, which included NUS students, current and retired politicians, and members of the public interested in policy and governance issues.
On the final day, highlights included sessions that discussed artificial intelligence (AI) governance, the current global state of conflict and peace, and Singapore’s policy approach to managing its most precious resource – its people.
AI and governance in a time of disinformation
In the session on AI and Governance in a Time of Disinformation, representatives from government, media, and major players in the AI industry discussed the complexities of designing and implementing AI governance. The discussion was moderated by Dr Carol Soon, Principal Research Fellow and Head, Society and Culture at the Institute of Policy Studies.
Dr Janil Puthucheary, Senior Minister of State in Singapore’s Ministry of Digital Development and Information, explained that Singapore is handling the disruption from AI similarly to other types of technological disruption. “We have an imperative to make sure that Singapore's people, our economy, our businesses, the jobs, the social cohesion and structure that we have, are able to benefit from whatever technological disruption is coming towards us,” he said.
However, there are unique challenges in this arena, such as the heavy reliance of Singapore’s economy on technology and the country’s close connections to the rest of the world through international policies and practices, globally accepted standards for technological interoperability, and more. Consequently, Singapore will feel the impact of new developments and regulations in AI almost instantly as they continue to emerge.
“We need to be able to develop our approach in real time as the promise as well as the pitfalls of AI play out in the world around us,” said Dr Janil.
A question from the audience about the need for an international AI agency prompted two panellists to share their experience of working with international organisations and why they do not feel such an agency is necessary.
“Instinctively, most people would feel that there is a need for international collaboration and cooperation,” said Mr Jaime Ho, Editor of The Straits Times. “But whether or not the formal structure of an agency is going to help, I seriously doubt it will, based upon the experiences we’ve seen in recent years. People need to agree to be part of it, and an agency is only worth its salt if it’s able to enforce certain things.”
Ms Eunice Huang, Google’s Asia-Pacific Head of AI and Emerging Tech Policy, highlighted that the AI ecosystem comprising governments, industries, and civil society is working on regulations and standards via annual AI Safety Summits and the International Organization for Standardization (ISO). “A lot of international coordination can be done without the need for a new international organisation to be set up, because that entails a lot of investment and administrative things, and the politics of it also gets in the way,” she said.
Conflict and peace in the 21st century
Amid the global focus on ongoing conflicts, speakers at a panel discussion on Conflict and Peace in the 21st Century, moderated by Associate Professor Francesco Mancini, Vice Dean (Executive Education) and Associate Professor in Practice at LKYSPP, called for a greater emphasis to be placed on studying and investing in peace.
Ms Wu Ye-Min, Regional Director, South and Southeast Asia, Centre for Humanitarian Dialogue, urged countries to invest meaningfully in peace by training and placing peacemakers and impartial mediators in the field, which has proved successful in southern Philippines. In Sulu, such initiatives have resulted in decreased violence and allowed local tourism to thrive, with beach resorts being opened in formerly dangerous areas.
Teaching youths about peace-making and de-escalation will not only equip them with the skills for peace, but also bring discussions of conflict resolution into schools, workplaces, and homes to effect local change.
Former diplomat Mr Kishore Mahbubani, Distinguished Fellow at the NUS Asia Research Institute, and Dr Michael Vatikiotis, Senior Advisor, Centre for Humanitarian Dialogue, highlighted the unusual peace that Southeast Asia has enjoyed for the past 45 years. They noted the success of ASEAN in creating positive regional relations, boosting overall economic development, and enabling the region to protect its interests and reduce unwanted interference from other countries.
The speakers warned that climate change is increasingly important as a conflict driver, with the effects already visible in the Sahel region of Africa. Peace-making efforts will be needed to resolve climate-related challenges like fish stock depletion as conditions prompt ocean life to relocate, an example that Ms Wu shared.
Said Mr Mahbubani: “If humanity was intelligent, it would realise that the message from climate change is that… climate change doesn't respect borders at all. We've got to realise that all 8 billion of us are on the same boat; we've got to work together to deal with climate change.”
Closing dialogue with Minister Tan See Leng
The last session of the festival was a dialogue with Dr Tan See Leng, Minister for Manpower and Second Minister for Trade and Industry, Singapore.
In his speech and responses to questions from the audience, Dr Tan shared insights into how Singapore has managed to thrive despite its size and lack of natural resources, by carefully designing and implementing manpower policies like the Central Provident Fund (CPF) and creating a unique tripartite partnership between the government, workers, and businesses.
While the shared values that made Singapore’s success possible remain relevant, the government recognises that the employment landscape and workers’ aspirations and expectations have changed over the years. Hence, the Ministry of Manpower (MOM) will shift its policy approach accordingly, he said.
“From the creation of the CPF system to our unique tripartite partnership, we have shown that innovative policies guided by strong values like meritocracy and fairness will lead to success,” Dr Tan said. “Through the times, MOM will continue to evolve, will continue to adapt.”
“Our singular focus will always be on supporting workers and businesses, driving growth, and fostering inclusive workplaces. I'm confident that as long as we hold true to our ideals and we are bold enough to try out new ideas, we can thrive in this ever-changing world.”
Penslar, Feldman examine plight of Jewish Americans after 10/7 attack
Scholars trace history of group in U.S., discuss why many are wrestling with what it means for Israel, their own place in nation’s culture
Christy DeSmith
Harvard Staff Writer
In the wake of the Oct. 7 terrorist attacks and subsequent pro-Palestinian protests around the U.S., many Jewish Americans have been grappling with their own identities in relation to Israel.
In a packed talk at Harvard Law School, Noah Feldman, the Felix Frankfurter Professor of Law, and Derek J. Penslar, William Lee Frost Professor of Jewish History, marked the anniversary of the Hamas-led massacre by detailing the historically close ties between American Jewry and the state of Israel today. In separate remarks, each underscored why the majority of Jews in the U.S. have felt profoundly betrayed over the past year.
During the 19th century, Jews residing in North America and western Europe experienced increased economic mobility and social integration. Soon they were openly appealing for political interventions on behalf of persecuted Jews in North Africa and the Russian Empire.
During the 20th century, optimism took root among those living in the U.S. “Jews came to feel themselves to be not unusual or exceptional, but rather exemplary citizens of the republic,” said Penslar, who is also director of the Center for Jewish Studies. “And by that argument, antisemitism was not just bad for the Jews. It was bad for everybody. It was un-American.”
Among American Jews, the movement to create a Jewish state became widely popular during World War II as the Nazis murdered two thirds of the Jews in Europe. But American proponents departed from the tenets of Zionism’s central and eastern European founders, who stressed the importance for Jews to prepare for new lives in historic Palestine.
“American Zionism was not about moving to Israel,” Penslar said. “It was a Zionism that was fundamentally optimistic about Jewish life right here in the United States.”
This dual faith in the U.S. and the state of Israel underpins the betrayal experienced by many American Jews today. “They’re feeling more vulnerable than any time since the Second World War,” Penslar noted.
The reaction is especially acute among Jewish Americans on the left who saw their political allies justify the Hamas-led killings of 1,200 people in Israel, Penslar said. But for the majority of Jewish Americans, he added, “you see it largely in the way people are talking about what’s happening in the universities.”
For one, said Feldman, author of “To Be a Jew Today: A New Guide to God, Israel, and the Jewish People” (2024), the demonstrations were the opposite of the “post-9/11-style condemnation” that Jewish Americans had expected following Oct. 7. But they also represented a possible “ground shift” in political attitudes and moral discourse on Israel.
“The long process by which American Jewish identity came to be primarily bound up in Israel meant that for Jews to hear the view expressed that Israel’s very existence is morally problematic, or maybe wrongful, is sincerely experienced as antisemitic,” Feldman said.
As an additional consequence, Feldman argued that bipartisan support for Israel is now at stake.
“A core strategic accomplishment of the American Jewish community in its Zionism,” Feldman said, is ensuring the American political system’s support for Israel “no matter which party is in power.”
Evidence that a realignment is underway can be seen in Muslim American voters rejecting a staunchly pro-Israel President Joe Biden during Michigan’s Democratic primary last winter or in former President Donald Trump’s more recent appeals to Jewish voters as Israel’s “protector.”
“The big historical question of what did the spring’s conflicts and protests and encampments and responses mean, with a capital m,” Feldman said, “is really going to depend on what happens in November.”
Chiba-Okabe explains his transition from practicing law in Japan to pursuing a Ph.D. in applied math and computational science and how those interests intersect.
Seem like peanut allergies were once rare and now everyone has them?
Surgeon, professor Marty Makary examines damage wrought when medicine closes ranks around inaccurate dogma
Excerpted from “Blind Spots: When Medicine Gets It Wrong, and What It Means for Our Health” by Marty Makary, M.P.H. ’98. Used with the permission of the publisher, Bloomsbury.
“Hi, my name is Chase, and I’ll be your waiter. Does anyone at the table have a nut allergy?”
My two Johns Hopkins students from Africa, Asonganyi Aminkeng and Faith Magwenzi, looked at each other, perplexed.
“What is it with the peanut allergies here?” Asonganyi asked me. “Ever since I landed at JFK from Cameroon, I noticed a food apartheid — food packages either read ‘Contains Tree Nuts’ or ‘Contains No Tree Nuts.’ ”
Asonganyi told me that even on his connecting flight to Baltimore, the flight attendant had made an announcement: “We have someone on the plane with a peanut allergy, so please try not to eat peanuts.” And on his first day at Johns Hopkins, a classmate invited him to dinner. The invite went something like this: 1) Would you like to come over for dinner; and 2) Do you have a peanut or other allergy?
“What’s going on here?” Asonganyi asked with a big smile. “We have no peanut allergies in Africa.”
Faith, who had flown in from Zimbabwe, nodded in agreement.
I looked at them and smiled. “In Egypt, where my family is from, we don’t have peanut allergies either,” I said. “Welcome to America. Peanut allergies are real and can be life-threatening here.”
Their observation reminded me of when my friend’s school banned peanuts from the campus. School administrators actually inquired with security authorities if metal detectors could detect a peanut. And then one day there was an “emergency.” A peanut was found on the floor of a school bus. It was like discovering an IED in Iraq. The kids were ordered to quietly exit the bus single-file until someone arrived to “decontaminate” the bus. Luckily, the peanut did not detonate and harm the public.
How did we get here?
In 1999, researchers at Mount Sinai Hospital estimated the incidence of peanut allergies in children to be 0.6 percent. Most were mild. Then starting in the year 2000, the prevalence began to surge. Doctors began to notice that more and more children affected had severe allergies.
The 1990s was the decade of peanut allergy panic. The media covered children who died of a peanut allergy, and doctors began writing more about the issue, speculating on the growing rate of the problem. The American Academy of Pediatrics (AAP) wanted to respond by telling parents what they should do to protect their kids. There was just one problem: They didn’t know what precautions, if any, parents should take.
Rather than admit that, in the year 2000 the AAP issued a recommendation for children zero to three years old and pregnant and lactating mothers to avoid all peanuts if any child was considered to be at high risk for developing an allergy.
The AAP committee mimicked what the UK health department had recommended two years earlier: total peanut abstinence. The recommendation was technically for high-risk children, but the AAP authors acknowledged that, “The ability to determine which infants are high risk is imperfect.” Having a family member with any allergy or asthma could qualify as “high-risk” using the strictest interpretation. And many well-meaning pediatricians and parents read the recommendation and thought, Why take chances? Instantly, pediatricians adopted a simple mnemonic to teach parents in their offices: “Remember 1-2-3. Age 1: start milk. Age 2: start eggs. Age 3: start peanuts.” A generation of pediatricians was indoctrinated with this mantra.
I did a close read of that 1998 UK health department recommendation to see if it cited any scientific study to back up the decree. I found one sentence stating that moms who eat peanuts are more likely to have children with peanut allergies. In other words, it blamed the moms. The report cited a 1996 British Medical Journal (BMJ) study. So I pulled that up and took a close look.
I couldn’t believe it.
The actual data did not find an association between pregnant moms eating peanuts and a child’s peanut allergy. But that didn’t matter: The train had left the station.
How could “experts” make a recommendation citing a study that did not even support the recommendation?
Bewildered by how the study seemed so badly misconstrued, I called its lead author, Dr. Jonathan Hourihane, a professor of pediatrics in Dublin. He shared the same frustration and told me he had opposed the peanut avoidance guideline when it came out. “It’s ridiculous,” he told me. “It’s not what I wanted people to believe.”
I specifically asked him how he felt about his study being used as the source to justify the sweeping recommendation. “I felt crossed,” he responded, using a little UK slang for feeling betrayed. He had not been consulted on the national guideline.
The 2000 AAP guideline was published in the specialty’s top journal, Pediatrics, activating many pediatricians to evangelize mothers when they brought their babies in for a checkup. Doctors and public health leaders had their new marching orders. Within months, a mass public education crusade was in full swing, and mothers, doing what they thought was best for their children, responded by following the instructions to protect their children.
But despite these efforts, things got worse. By 2004, it was clear that the rate of peanut allergies was going the wrong way. Peanut allergies soared. More concerning, extreme peanut allergies, which can be life-threatening, became commonplace in America.
Suddenly, emergency department visits for peanut anaphylaxis — a life-threatening allergic swelling of the airways — skyrocketed, and schools began enacting peanut bans. By 2007, 18 percent of Virginia schools had banned peanuts altogether. And in 2016, the Parkway School District in Missouri reported 957 students with documented life-threatening food allergies, most of which were to peanuts. The rate had increased 50 percent from just six years prior, and more than 1,000 percent from a generation earlier.
As things got worse, many public health leaders doubled down. If only every parent would comply with the pediatrics association guideline, they thought, we as a country could finally beat down peanut allergies and win the war. The dogma became a self-licking ice cream cone.
But the groupthink could not have been more wrong.
Swimming against the current
Stephen Combs is a salt-of-the-earth pediatrician in rural East Tennessee. At one point, the other pediatricians in Combs’s group noticed something unique about his patients. None of them had peanut allergies. This despite the fact that his colleagues were seeing more and more kids with peanut allergies in their practices. What was going on?
I was curious to learn more about his impressive track record, so I traveled to the beautiful rolling hills of Johnson City, Tennessee, to visit him. (I often learn a lot when I get outside of the bubble of my urban university hospital.)
I discovered that all the pediatricians in Combs’s group were as impressive as he was: making house calls, staying late to see patients, and educating parents on how to raise healthy children. They all practiced pediatrics the same way.
Except for one thing.
Combs had never followed the AAP guideline for young children to avoid peanuts. The reason for his defiance was simple. Combs did his residency at Duke Medical Center in North Carolina, where he trained under world-famous pediatric immunologist Rebecca Buckley. When the AAP guideline came out in 2000 with a big splash, Buckley recognized that it violated a basic principle of immunology known as immune tolerance: the body’s natural way of accepting foreign molecules present early in life. It was like the dirt theory, whereby newborns exposed to dirt, dander, and germs may then have lower allergy and asthma risks. Buckley confidently told her students and residents, including Combs, to ignore the AAP recommendation, and in fact, to do the opposite. She explained that peanut abstinence doesn’t prevent peanut allergies, it causes them.
Her explanation turned out to be prophetic.
Since his training with Buckley, Combs has consistently instructed parents to introduce a touch of peanut butter (mixed with water to avoid a choking risk) as soon as a child is able to eat it. To this day, the thousands of children in East Tennessee lucky enough to have Combs as their pediatrician do not have peanut allergies.
Extrapolating the principle to other potential allergens, Combs also encouraged the early introduction of eggs, milk, strawberries, and even early exposure to dogs and cats. As a result, the children in his practice rarely developed an allergy to these things, and when they did, it was mild.
An embarrassingly simple study
Buckley and her trainees were not alone in bucking the AAP’s guidance. In fact, many experts in immunology had long known of mouse studies showing that avoiding certain foods triggers allergies to those foods. But the laboratory immunology community was largely disconnected from the clinical allergist and the pediatric community.
Gideon Lack, a pediatric allergist and immunologist in London, challenged the UK guideline. It “was not evidence-based,” he wrote in The Lancet in 1998. “Public-health measures may have unintended effects … they could increase the prevalence of peanut allergy.”
Two years later, the same year the AAP issued their peanut avoidance recommendation, he was giving a lecture in Israel on allergies and asked the roughly 200 pediatricians in the audience, “How many of you are seeing kids with a peanut allergy?”
Only two or three raised their hands. Back in London, nearly every pediatrician had raised their hand to the same question.
Startled by the discrepancy, he had a Eureka moment. Many Israeli infants are fed a peanut-based food called Bamba. To him, it was no coincidence.
Lack quickly assembled researchers in Tel Aviv and Jerusalem to launch a formal study. They found that Jewish children in Israel had one-tenth the rate of peanut allergies compared to Jewish children in the UK, suggesting it was not a genetic predisposition, as the medical establishment had assumed. Lack and his Israeli colleagues titled their publication “Early Consumption of Peanuts in Infancy Is Associated with a Low Prevalence of Peanut Allergy.”
However, their publication in 2008 was not enough to uproot the groupthink. Avoiding peanuts had been the correct answer on medical school tests and board exams, which were written and administered by the American Board of Pediatrics. Many in the medical community dismissed Lack’s findings and continued to insist that young children avoid peanuts. For nearly a decade after AAP’s peanut avoidance recommendation, neither the National Institutes of Health’s (NIH’s) National Institute of Allergy and Infectious Diseases (NIAID) nor other institutions would fund a robust study to evaluate the recommendation, to see if it was helping or hurting children.
But things were getting worse. The more health officials implored parents to follow the recommendation, the worse peanut allergies got. The number of children going to the emergency department because of peanut allergies tripled in just one decade (2005–14). It spread like a virus. By 2019, one report estimated that one in every 18 American children had a peanut allergy. Schools began to ban peanuts and regulators met to purge peanuts from childhood snacks as EpiPen sales soared. Pharma exploited the situation by price-gouging the desperate parents and schools. Mylan Pharmaceuticals jacked up the price of an EpiPen from $100 to $600 in the U.S. (It’s $30 in some countries.)
The AAP recommendation had created a vicious cycle. The more prevalent peanut allergies became, the more people avoided peanuts for young children. This, in turn, caused more peanut allergies. Tunnel-vision thinking had created a nightmare scenario for which the only possible solution seemed to be the total eradication of peanuts from the planet.
As things got worse, a dissenting Lack decided to conduct a clinical trial randomizing infants to peanut exposure (at 4-11 months of age) versus no peanut exposure. He found that early peanut exposure resulted in an 86 percent reduction in peanut allergies by the time the child reached age 5 compared to children who followed the AAP recommendation. He blasted his findings to the world in a New England Journal of Medicine publication in 2015, finally proving what immunologists like Buckley had known for decades: Peanut abstinence causes peanut allergies. It was now undeniable; the AAP had it backward.
I reached out to Lack and had breakfast with him when he was traveling to Washington, D.C., for a medical conference in 2024. He told me that his initial hypothesis had been based on an early observation as a pediatrician that kids who got their ears pierced sometimes developed a nickel allergy around the piercing. But kids who had orthodontics didn’t. He realized that kids with orthodontics had prior exposure to nickel in the braces, making them immune. This observation was consistent with the concept of “oral tolerance” that he’d studied in mice experiments conducted at the University of Colorado in the 1990s.
He had an interesting observation from his childhood that reminded him that conventional wisdom can change. His grandfather had a heart attack, which doctors treated with strict bed rest — a recommendation that was eventually replaced with cardiac rehab exercise. As a 6-year-old, Lack recalled that his grandfather was not allowed to leave his bed. The family members had to take him his meals. His doctors managed his damaged heart by weakening it further.
“In science, we tend to get in a rut and then dig in,” he told me. “We have to be open-minded.”
Lack is now recognized as a hero in the field of allergy. But when he did his big study, he was heavily criticized.
It would take the AAP two years after Lack’s randomized trial was published to reverse its 2000 guidance for pediatricians and parents. It would also take two years for the NIH’s NIAID division to issue a report supporting the reversal.
Did they really need two years? Where was the sense of deep remorse? The affected families deserved to have the medical establishment move with a sense of urgency to correct their recommendation immediately following Lack’s definitive study. Hugh Sampson, another trainee of Rebecca Buckley, led the NIAID report that undid the recommendation. He told me that working with the government agency was frustrating. Sampson is one of the country’s leading allergists. When I asked him what he thought about the entire saga, he told me, “The food allergy community has been appropriately chastised [for getting the peanut recommendation wrong].”
An entire generation — millions of children — had been harmed by groupthink, and many still are feeling the effects. Now, at least the faucet of bad advice was turned off.
Millions of workers are also juggling caregiving. Employers need to rethink.
Christina Pazzanese
Harvard Staff Writer
Business School report finds rigid hiring policies, work rules, scheduling hurt employees but also productivity, retention, bottom line
Millions of Americans, from hourly retail staffers to corporate vice presidents, wrestle with the demands of work while parenting young children, caring for a sick spouse or aging parent — or both.
That juggling act is made even tougher by rigid practices and rules set by employers, such as inflexible or unpredictable work schedules, and by employers’ failure to grasp how employees are struggling and to provide them with support, according to a new Harvard Business School report.
That disregard harms both workers and companies. Care-related issues are the single most common reason employees leave the workforce. Companies also pay a price, both directly and indirectly, often in ways they don’t fully understand, the report found.
The Gazette spoke with its author, Joseph B. Fuller ’79, M.B.A. ’81, professor of management practice and co-chair of the Managing the Future of Work project at HBS, about the problem and what companies can do. The interview has been edited for clarity and length.
There are 50 to 60 million caregivers in the nation. They make up the largest portion of a cohort you study known as “hidden workers” — those who want a job or more hours but are thwarted by employer policies. Who is in this group and why do they leave the workforce?
They are people who have some significant care obligations within their household. Those range from things that would strike people as very ordinary — a two-parent household with kids — or it could be something much more exotic — they have a chronically ill child or spouse. An apogee group is what we call the “sandwich generation,” where they’re caring for dependent children, from a newborn to a teenager, but they’re also caring for one or more seniors — a parent, an in-law.
Well over 50 percent of workers report they have some caregiving obligation. The question becomes: Do the terms and conditions of their employment and the nature of that obligation mesh? A lot of traditional expectations of employers, but also coworkers and even customers, don’t always jibe with the cadence of care.
We can see this in things like work schedule. If I have a child with a chronic condition, anything from severe asthma to a behavioral issue, stuff happens. And if I have to go see the principal of the school tomorrow because of a disciplinary problem or it’s a bad air-quality day and my child really shouldn’t be outdoors, I’m going to keep them home from school. As a normal working adult, I don’t have money to hire a service, so I’m going to miss a day of work.
Fifty percent of women who have left the workforce, and who say they would have preferred to keep working, did so because they could not reconcile the obligations of the career path they were on with caring for kids.
[Chart: Primary caregiver by gender and age]
Caregiving, along with higher education and healthcare delivery, has seen among the largest real-dollar cost increases of the last 10 to 15 years. Childcare is more expensive than it’s ever been in real dollars. If you’re paying $1,200 or $1,300 a month for full-day childcare, for an average American job that would equal the average after-tax, discretionary income that a worker is left with at the end of the month. So, economics underlies many people’s considerations. But it’s also a combination of career considerations and the specific caregiving needs of the family.
You found that hiring processes used by employers have contributed to the difficulties caregivers face when trying to return to work after some circumstance forces them to leave the workplace for a time. Why is that?
Unfortunately, several things start happening if your work history gets interrupted. The first is something that virtually every employer uses to assess candidates called the continuity of employment filter, which is used in an AI-powered tool called an applicant tracking system. It asks an employer: If someone has a gap in their work history, how should I treat that? If there’s a gap of more than six months, 50 percent of employers will drop a person from the candidate pool.
How does the struggle to manage work and caregiving typically manifest on the job? Do most employers even realize some workers are having a hard time?
First of all, employees in most companies only go to their boss or to their company to discuss a caregiving issue as a last resort. Their concern is: If I bring this up, I’m certainly not going to be a candidate for promotion. It’s going to affect my performance evaluation. They fear they’ll be viewed as less committed, that they’re going to be suspect.
The biggest two effects of having caregiving responsibilities that are hard to reconcile with your job are absenteeism and presenteeism. Either you’ll miss work, or you’ll be so distracted while you’re at work you won’t get much done.
For a lot of frontline lower-wage jobs, companies have a rule that if someone is late to work three times in a month or has an un-preplanned absence three times in a quarter, by rule you’re just fired. And that’s completely understandable. They’re running complicated operations; they can’t let every store manager make their own decisions, not only because it would be chaos, but also because if somebody in Topeka is getting fired, but a person who did the same thing in Toledo is being kept because the store managers made different decisions and it all ends up in court, the company loses.
And companies, especially human resources functions, hate administering exceptions. Walmart employs a million people. If all of a sudden everything is customized, they’d be out of business in 6 months.
One thing my early research showed is that caregiving concerns are endemic. They affect roughly 80 percent of the workforce some of the time, most of the time, or all of the time. But the way working relationships have been structured by employers for a century rests on propositions like “I’m paying you and providing a decent place to work. I don’t want to intrude on your life, and I don’t want to hear about it. I don’t want to talk about it.”
Besides concerns about fairness, administrative logistics, and lost productivity, there are costs for failing to address the caregiving needs of employees, costs many companies don’t realize they’re paying. What are some of those?
First, there’s a very substantial cost of replacing a worker. It doesn’t matter if they’re fired, or they quit. Even for low-wage workers, the cost is between 25 percent and 35 percent of annual compensation. That’s a good proxy for how much it’s going to cost to replace that worker.
Second, people who have some tenure with the employer have lots of knowledge based on that experience, which makes them more productive. Say they join your company at age 24 or 25 and then after four or five years, they decide to start a family. They’ve got five years of work experience; they have a network inside the company; they may know customers; they know how you do things. But they conclude they can’t stay given the requirements of the job.
Unless employers assist them so they can stay in the job, they give up a productive worker. Their replacement is an unknown quantity. Employers constantly make speculative bets on new employees based on pieces of paper and a couple of interviews to replace a worker the company has a huge amount of data on from personnel files, performance evaluations, etc.
When they replace the worker who leaves because of caregiving conflicts, they incur the direct costs, but they also absorb indirect costs, what we call “tacit knowledge” — how we do things around here. And say that worker is on a team and they’re the glue on that team. Another team member has been thinking about quitting, too. We know from psychological research that person will feel psychological permission to quit if another worker quits.
What my research shows is the more senior you are, and the more money you make, the more likely you are to leave a job because of a caregiving obligation. Employers are always surprised by that. They assume a worker is more likely to leave if they’re low-paid.
A worker in the top quartile of compensation is twice as likely as a bottom-quartile worker to leave a job because of a caregiving conflict. And that 25 percent to 35 percent cost of replacing a worker rises to 100 percent or more of annual compensation when you’re talking about a top-quartile worker — middle management, upper-middle management, all the way to the executive ranks.
All those things add up. Unfortunately, employers historically aren’t very good at connecting those dots. They don’t understand their own economics.
Why aren’t they better at seeing the full picture of these costs?
Unfortunately, the lack of connection between managers and supervisors of workers, and their lived experience, and the human resources function is really quite surprising. HR, particularly in big companies, just tends to see data. They’re not talking to supervisors who say, “My best workers are leaving regularly and here’s why.” A lot of companies don’t do exit interviews, so they don’t connect data about why somebody leaves in performance reviews. They don’t say, “Is there anything that’s causing you to think about leaving the company?”
What should employers do to remedy this situation?
The first is to realize there’s a big pool of talent out there that’s been marginalized because of caregiving responsibilities.
Employers should review how they search for talent and what conditions they’re putting on applicants in the applicant tracking system and adjust them to include more candidates. There’s this big pool of workers that is being structurally obliged to end up in part-time, low-wage work because of these “disqualifying” factors. I’m not going to say those standards are arbitrary, but they contribute to an artificial shortage of qualified candidates that employers complain about despite policies creating that shortage.
A second is, understand that all your employees are past, current, and future caregivers and that their circumstances will change. Their life path affects their productivity and their propensity to quit or to behave in a way that causes you to fire them.
Understand the care demographics of your workforce. Make their caregiving lives outside work something that’s discussable with their supervisor. There’s a tremendous return on loyalty and engagement from workers who hear from their supervisors.
Do exit interviews, and add to your performance reviews questions like “Have you thought about leaving? What would cause you to do that?” Find out what’s driving absenteeism and resignations. Look at your own data.
Just invest in having a more sophisticated understanding of your own economics. Because if you do, you’ll make better decisions.
Our brains constantly work to make predictions about what’s going on around us, ensuring, for instance, that we can attend to and consider the unexpected. A new study examines how this predictive process works during consciousness and how it breaks down under general anesthesia. The results add evidence to the idea that conscious thought requires synchronized communication — mediated by brain rhythms in specific frequency bands — between basic sensory and higher-order cognitive regions of the brain.
Previously, members of the research team in The Picower Institute for Learning and Memory at MIT and at Vanderbilt University had described how brain rhythms enable the brain to remain prepared to attend to surprises. Cognition-oriented brain regions (generally at the front of the brain) use relatively low-frequency alpha and beta rhythms to suppress processing by sensory regions (generally toward the back of the brain) of stimuli that have become familiar and mundane in the environment (e.g., your co-worker’s music). When sensory regions detect a surprise (e.g., the office fire alarm), they use faster-frequency gamma rhythms to tell the higher regions about it, and the higher regions process that at gamma frequencies to decide what to do (e.g., exit the building).
The new results, published Oct. 7 in the Proceedings of the National Academy of Sciences, show that when animals were under propofol-induced general anesthesia, a sensory region retained the capacity to detect simple surprises but communication with a higher cognitive region toward the front of the brain was lost, making that region unable to engage in its “top-down” regulation of the activity of the sensory region and keeping it oblivious to simple and more complex surprises alike.
What we've got here is failure to communicate
“What we are doing here speaks to the nature of consciousness,” says co-senior author Earl K. Miller, Picower Professor in The Picower Institute for Learning and Memory and MIT’s Department of Brain and Cognitive Sciences. “Propofol general anesthesia deactivates the top-down processes that underlie cognition. It essentially disconnects communication between the front and back halves of the brain.”
Co-senior author Andre Bastos, an assistant professor in the psychology department at Vanderbilt and a former member of Miller’s MIT lab, adds that the study results highlight the key role of frontal areas in consciousness.
“These results are particularly important given the newfound scientific interest in the mechanisms of consciousness, and how consciousness relates to the ability of the brain to form predictions,” Bastos says.
“The brain’s ability to predict is dramatically altered during anesthesia,” he continues. “It was interesting that the front of the brain, areas associated with cognition, were more strongly diminished in their predictive abilities than sensory areas. This suggests that prefrontal areas help to spark an ‘ignition’ event that allows sensory information to become conscious. Sensory cortex activation by itself does not lead to conscious perception. These observations help us narrow down possible models for the mechanisms of consciousness.”
Yihan Sophy Xiong, a graduate student in Bastos’ lab who led the study, says the anesthetic reduces the times in which inter-regional communication within the cortex can occur.
“In the awake brain, brain waves give short windows of opportunity for neurons to fire optimally — the ‘refresh rate’ of the brain, so to speak,” Xiong says. “This refresh rate helps organize different brain areas to communicate effectively. Anesthesia both slows down the refresh rate, which narrows these time windows for brain areas to talk to each other and makes the refresh rate less effective, so that neurons become more disorganized about when they can fire. When the refresh rate no longer works as intended, our ability to make predictions is weakened.”
Learning from oddballs
To conduct the research, the neuroscientists measured the electrical signals, or “spiking,” of hundreds of individual neurons, as well as the coordinated rhythms of their aggregated activity (at alpha/beta and gamma frequencies), in two areas on the surface, or cortex, of the brain of two animals as they listened to sequences of tones. Sometimes the sequences would all be the same note (e.g., AAAAA). Sometimes there’d be a simple surprise that the researchers called a “local oddball” (e.g., AAAAB). But sometimes the surprise would be more complicated, or a “global oddball.” For example, after hearing a series of AAAABs, there’d all of a sudden be an AAAAA, which violates the global pattern but not the local one.
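For concreteness, here is a minimal sketch of how such local/global oddball sequences could be generated. The trial counts, probabilities, and tone labels are illustrative assumptions, not the study's actual design parameters.

```python
import random

def make_sequence(is_local_oddball: bool) -> list[str]:
    """Build one five-tone trial: AAAAA (no local oddball) or AAAAB."""
    return ["A"] * 4 + (["B"] if is_local_oddball else ["A"])

def make_block(common_ends_in_b: bool, n_trials: int = 20, p_rare: float = 0.2) -> list[list[str]]:
    """Build a block in which one trial type is common (establishing the
    global pattern) and the other is a rare 'global oddball'."""
    trials = []
    for _ in range(n_trials):
        rare = random.random() < p_rare
        # If AAAAB is the common trial, an AAAAA trial violates the global
        # pattern even though it contains no local (within-trial) surprise.
        ends_in_b = common_ends_in_b != rare
        trials.append(make_sequence(ends_in_b))
    return trials

block = make_block(common_ends_in_b=True)
for trial in block[:5]:
    print("".join(trial))
```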
Prior work has suggested that a sensory region (in this case the temporoparietal area, or Tpt) can spot local oddballs on its own, Miller says. Detecting the more complicated global oddball requires the participation of a higher order region (in this case the frontal eye fields, or FEF).
The animals heard the tone sequences both while awake and while under propofol anesthesia. There were no surprises about the waking state. The researchers reaffirmed that top-down alpha/beta rhythms from FEF carried predictions to the Tpt and that Tpt would increase gamma rhythms when an oddball came up, causing FEF (and the prefrontal cortex) to respond with upticks of gamma activity as well.
But by several measures and analyses, the scientists could see these dynamics break down after the animals lost consciousness.
Under propofol, for instance, spiking activity declined overall. When a local oddball came along, Tpt spiking still increased notably, but spiking in FEF no longer followed suit as it does during wakefulness.
Meanwhile, when a global oddball was presented during wakefulness, the researchers could use software to “decode” its representation among neurons in FEF and the prefrontal cortex (another cognition-oriented region). They could also decode local oddballs in the Tpt. But under anesthesia the decoder could no longer reliably detect the representation of local or global oddballs in FEF or the prefrontal cortex.
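The article doesn't detail the decoding software. A common approach, sketched below with assumed data shapes and a classifier chosen for illustration, is to train a cross-validated linear classifier on trial-by-trial spike counts and test whether oddball identity can be read out above chance (about 0.5 for two balanced classes).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical data: spike counts for 200 trials x 80 neurons, plus a
# label per trial marking whether it contained an oddball.
n_trials, n_neurons = 200, 80
spike_counts = rng.poisson(lam=5.0, size=(n_trials, n_neurons)).astype(float)
is_oddball = rng.integers(0, 2, size=n_trials)

# Cross-validated decoding accuracy; near 0.5 means the oddball leaves no
# readable trace in this population (here guaranteed, since the synthetic
# counts are unrelated to the labels), as reported for FEF under propofol.
decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, spike_counts, is_oddball, cv=5).mean()
print(f"decoding accuracy: {accuracy:.2f}")
```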
Moreover, when they compared rhythms in the regions across wakeful and unconscious states, they found stark differences. When the animals were awake, oddballs increased gamma activity in both Tpt and FEF, and alpha/beta rhythms decreased. Regular, non-oddball stimulation increased alpha/beta rhythms. But when the animals lost consciousness, the increase in gamma rhythms from a local oddball was even greater in Tpt than when the animal was awake.
“Under propofol-mediated loss of consciousness, the inhibitory function of alpha/beta became diminished and/or eliminated, leading to disinhibition of oddballs in sensory cortex,” the authors wrote.
Other analyses of inter-region connectivity and synchrony revealed that the regions lost the ability to communicate during anesthesia.
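The specific connectivity and synchrony measures aren't named in this account. One standard measure is spectral coherence between simultaneously recorded signals, sketched here on synthetic data; the 20 Hz shared component, sampling rate, and all other constants are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import coherence

fs = 1000  # sampling rate in Hz (illustrative)
rng = np.random.default_rng(1)
t = np.arange(0, 10, 1 / fs)

# Hypothetical field potentials: a shared 20 Hz (beta-band) component plus
# independent noise stands in for inter-regional communication.
shared = np.sin(2 * np.pi * 20 * t)
fef_signal = shared + rng.normal(scale=1.0, size=t.size)
tpt_signal = shared + rng.normal(scale=1.0, size=t.size)

# Coherence near 1 in a band indicates strong synchrony at that frequency;
# a drop toward 0, as under anesthesia, indicates lost communication.
f, cxy = coherence(fef_signal, tpt_signal, fs=fs, nperseg=1024)
beta_band = (f >= 12) & (f <= 30)
print(f"mean beta-band coherence: {cxy[beta_band].mean():.2f}")
```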
In all, the study’s evidence suggests that conscious thought requires coordination across the cortex, from front to back, the researchers wrote.
“Our results therefore suggest an important role for prefrontal cortex activation, in addition to sensory cortex activation, for conscious perception,” the researchers wrote.
In addition to Xiong, Miller, and Bastos, the paper’s other authors are Jacob Donoghue, Mikael Lundqvist, Meredith Mahnke, Alex Major, and Emery N. Brown.
The National Institutes of Health, The JPB Foundation, and The Picower Institute for Learning and Memory funded the study.
Researchers tested how the brain's ability to judge whether sensory stimuli are novel or not breaks down under anesthesia. Sensory regions at the back of the brain still processed sound, but they lost the ability to communicate about novelty to the front of the brain, where behavioral decisions take place.
For two days at The Picower Institute for Learning and Memory at MIT, participants in the Kuggie Vallee Distinguished Lectures and Workshops celebrated the success of women in science and shared strategies to persist through, or better yet dissipate, the stiff headwinds women still face in the field.
“Everyone is here to celebrate and to inspire and advance the accomplishments of all women in science,” said host Li-Huei Tsai, Picower Professor in the Department of Brain and Cognitive Sciences and director of the Picower Institute, as she welcomed an audience that included scores of students, postdocs, and other research trainees. “It is a great feeling to have the opportunity to showcase examples of our successes and to help lift up the next generation.”
Tsai earned the honor of hosting the event after she was named a Vallee Visiting Professor in 2022 by the Vallee Foundation. Foundation president Peter Howley, a professor of pathological anatomy at Harvard University, said the global series of lectureships and workshops were created to honor Kuggie Vallee, a former Lesley College professor who worked to advance the careers of women.
During the program Sept. 24-25, speakers and audience members alike made it clear that helping women succeed requires both recognizing their achievements and resolving to change social structures in which they face marginalization.
Inspiring achievements
Lectures on the first day featured two brain scientists who have each led acclaimed discoveries that have been transforming their fields.
Michelle Monje, a pediatric neuro-oncologist at Stanford University whose recognitions include a MacArthur Fellowship, described her lab’s studies of brain cancers in children, which emerge at specific times in development as young brains adapt to their world by wiring up new circuits and insulating neurons with a fatty sheathing called myelin. Monje has discovered that when the precursors to myelinating cells, called oligodendrocyte precursor cells, harbor cancerous mutations, the tumors that arise — called gliomas — can hijack those cellular and molecular mechanisms. To promote their own growth, gliomas tap directly into the electrical activity of neural circuits by forging functional neuron-to-cancer connections, akin to the “synapse” junctions healthy neurons make with each other. Years of her lab’s studies, often led by female trainees, have not only revealed this insidious behavior (and linked aberrant myelination to many other diseases as well), but also revealed specific molecular factors involved. Those findings, Monje said, present completely novel potential avenues for therapeutic intervention.
“This cancer is an electrically active tissue and that is not how we have been approaching understanding it,” she said.
Erin Schuman, who directs the Max Planck Institute for Brain Research in Frankfurt, Germany, and has won honors including the Brain Prize, described her groundbreaking discoveries related to how neurons form and edit synapses along the very long branches — axons and dendrites — that give the cells their exotic shapes. Synapses form very far from the cell body where scientists had long thought all proteins, including those needed for synapse structure and activity, must be made. In the mid-1990s, Schuman showed that the protein-making process can occur at the synapse and that neurons stage the needed infrastructure — mRNA and ribosomes — near those sites. Her lab has continued to develop innovative tools to build on that insight, cataloging the stunning array of thousands of mRNAs involved, including about 800 that are primarily translated at the synapse, studying the diversity of synapses that arise from that collection, and imaging individual ribosomes such that her lab can detect when they are actively making proteins in synaptic neighborhoods.
Persistent headwinds
While the first day’s lectures showcased examples of women’s success, the second day’s workshops turned the spotlight on the social and systemic hindrances that continue to make such achievements an uphill climb. Speakers and audience members engaged in frank dialogues aimed at calling out those barriers, overcoming them, and dismantling them.
Susan Silbey, the Leon and Anne Goldberg Professor of Humanities, Sociology and Anthropology at MIT and professor of behavioral and policy sciences in the MIT Sloan School of Management, told the group that as bad as sexual harassment and assault in the workplace are, the more pervasive, damaging, and persistent headwinds for women across a variety of professions are “deeply sedimented cultural habits” that marginalize their expertise and contributions in workplaces, rendering them invisible to male counterparts, even when they are in powerful positions. High-ranking women in Silicon Valley who answered the “Elephant in the Valley” survey, for instance, reported high rates of demeaning comments and behavior, as well as exclusion from social circles. Even U.S. Supreme Court justices are not immune, she noted, citing research showing that for decades female justices have been interrupted with disproportionate frequency during oral arguments at the court. Silbey’s research has shown that young women entering the engineering workforce often become discouraged by a system that appears meritocratic, but in which they are often excluded from opportunities to demonstrate or be credited for that merit and are paid significantly less.
“Women’s occupational inequality is a consequence of being ignored, having contributions overlooked or appropriated, of being assigned to lower-status roles, while men are pushed ahead, honored and celebrated, often on the basis of women’s work,” Silbey said.
Often relatively small in numbers, women in such workplaces become tokens — visible as different, but still treated as outsiders, Silbey said. Women tend to internalize this status, becoming very cautious about their work while some men surge ahead in more cavalier fashion. Silbey and speakers who followed illustrated the effect this can have on women’s careers in science. Kara McKinley, an assistant professor of stem cell and regenerative biology at Harvard, noted that while the scientific career “pipeline” in some areas of science is full of female graduate students and postdocs, only about 20 percent of natural sciences faculty positions are held by women. Strikingly, women are already significantly depleted in the applicant pools for assistant professor positions, she said. Those who do apply tend to wait until they are more qualified than the men they are competing against.
McKinley and Silbey each noted that women scientists submit fewer papers to prestigious journals, with Silbey explaining that it’s often because women are more likely to worry that their studies need to tie up every loose end. Yet, said Stacie Weninger, a venture capitalist and president of the F-Prime Biomedical Research Initiative and a former editor at Cell Press, women were also less likely than men to rebut rejections from journal editors, thereby accepting the rejection even though rebuttals sometimes work.
Several speakers, including Weninger and Silbey, said pedagogy must change to help women overcome a social tendency to couch their assertions in caveats when many men speak with confidence and are therefore perceived as more knowledgeable.
At lunch, trainees sat in small groups with the speakers. They shared sometimes harrowing personal stories of gender-related difficulties in their young careers and sought advice on how to persist and remain resilient. Schuman advised the trainees to report mistreatment, even if they aren’t confident that university officials will be able to effect change, to at least make sure patterns of mistreatment get on the record. Reflecting on discouraging comments she experienced early in her career, Monje advised students to build up and maintain an inner voice of confidence and draw upon it when criticism is unfair.
“It feels terrible in the moment, but cream rises,” Monje said. “Believe in yourself. It will be OK in the end.”
Lifting each other up
Speakers at the conference shared many ideas to help overcome inequalities. McKinley described a program she launched in 2020 to ensure that a diversity of well-qualified women and non-binary postdocs are recruited for, and apply for, life sciences faculty jobs: the Leading Edge Symposium. The program identifies and names fellows — 200 so far — and provides career mentoring advice, a supportive community, and a platform to ensure they are visible to recruiters. Since the program began, 99 of the fellows have gone on to accept faculty positions at various institutions.
In a talk tracing the arc of her career, Weninger, who trained as a neuroscientist at Harvard, said she left bench work for a job as an editor because she wanted to enjoy the breadth of science, but also noted that her postdoc salary didn’t even cover the cost of child care. She left Cell Press in 2005 to help lead a task force on women in science that Harvard formed in the wake of comments by then-president Lawrence Summers widely understood as suggesting that women lacked “natural ability” in science and engineering. Working feverishly for months, the task force recommended steps to increase the number of senior women in science, including providing financial support for researchers who were also caregivers at home so they’d have the money to hire a technician. That extra set of hands would afford them the flexibility to keep research running even as they also attended to their families. Notably, Monje said she does this for the postdocs in her lab.
A graduate student asked Silbey at the end of her talk how to change a culture in which traditionally male-oriented norms marginalize women. Silbey said it starts with calling out those norms and recognizing that they are the issue, rather than increasing women’s representation in, or asking them to adapt to, existing systems.
“To make change, it requires that you do recognize the differences of the experiences and not try to make women exactly like men, or continue the past practices and think, ‘Oh, we just have to add women into it’,” she said.
Silbey also praised the Kuggie Vallee event at MIT for assembling a new community around these issues. Women in science need more social networks where they can exchange information and resources, she said.
“This is where an organ, an event like this, is an example of making just that kind of change: women making new networks for women,” she said.
A two-day event celebrated the successes of women in science but also examined reasons for persistent inequality. At a workshop on the second day, the audience heard from a panel of scientists including (left to right) Michelle Monje, Susan Silbey, Kara McKinley, Erin Schuman, Stacie Weninger, and moderator Elly Nedivi, the William R. and Linda R. Young Professor in The Picower Institute.
3 million Americans have dental implants — but procedure wasn’t always ‘routine’
Surgeon recounts changes in field over 40-year career — from titanium screws to bone regeneration — as he accepts Goldhaber Award
Clea Simon
Harvard Correspondent
Celebrating 40 years of progress in the field of dental implant surgery, the Harvard School of Dental Medicine honored Daniel Buser — University of Bern Professor and Harvard visiting lecturer — with the Paul Goldhaber Award on Monday. While accepting the honor, the most prestigious granted by the Dental School, the renowned dental surgeon gave an address that spanned both his academic career and the advances in the field, four decades “of pioneers and big breakthroughs.”
Since 2000, Buser said, dental implant surgery has become both “routine” and “highly successful.” Nearly 3 million Americans have dental implants, a number that is growing by 500,000 annually, according to the American Academy of Implant Dentistry. Despite a trend toward older patients, the procedure has success and survival rates of 95 percent and 98 percent, given good hygiene (and a warning to “not smoke too much”). “Much better than conventional prosthetics,” said Buser. “Better than hip replacement.” He added that the field has cut the number of surgical interventions, decreased pain and morbidity, and shortened healing and treatment times.
Over his career, Buser saw a “paradigm shift” in the surfaces of surgical dental implants. While the earliest implants were anchored by a smooth, polished titanium screw (known as the Brånemark surface after Swedish researcher Per-Ingvar Brånemark), the field trended toward “micro-rough” surfaces after research found that they reduced average healing time and failure rates.
The next big breakthrough, explained Buser, was guided bone regeneration, which uses barrier membranes to direct the growth of new bone around an implant. In Bern, he said, the first clinical case, involving the extraction of a premolar replaced by an implant, “worked very nice.”
The technique did have complications, however. Collapse of the membranes and difficulties in healing set off a search for better materials. With research on miniature pigs, “We learned very quickly we needed something to support the membrane.” The answer? Two different fillers, the so-called composite graft, which has now become standard.
Further research has revealed that the placement of the implant has an impact, as does the so-called micro-gap, the space between the various components of the implant. Looking at “possible ways to minimize or eliminate inflammation,” said Buser, “you want to either decrease or eliminate the micro-gap, seal it, or you can physically move the micro-gap up,” higher on the jaw, which reduces the chance of inflammation and bone loss.
Another advance has been bone conditioning, which uses a patient’s own bone chips. Bathing the chips in a mix of the patient’s blood and an isotonic solution known as Ringer’s solution, to avoid clotting, prompts the bone chips to release growth factors. “We have seen that two of the most important growth factors can be detected very quickly,” he said, opening up more options for healing.
Such advances have been aided by the introduction of cone-beam computed tomography, a volumetric scanning machine that provides 3D models and serves as a diagnostic tool. The technology is now routine: “This gives us so much information,” he said.
With so many options — such as whether to do immediate implant placement after an extraction or to wait — the current challenge is “all about case selection,” said Buser, noting that each case should dictate an individual approach. Adding that most complications are caused by poorly trained practitioners, he stressed the importance of the kind of teamwork that encourages both research and personal growth. “You have to have lifelong learning. You have to go to conferences to be updated.”
Speaking to a full house as well as an online audience, Buser also touched on being a mentor, a role shared by the late Goldhaber, who served as the dean of the Dental School for 22 years.
“For an academic career, you must have a good mentor,” said Buser. Naming such luminaries in the field as the late André Schroeder, professor of operative dentistry and endodontics at the University of Bern; Robert Schenck, whom Buser recruited to oral and maxillofacial surgery from his original field of orthopedics; and Ray Williams, P.D. ’73, a 2013 Goldhaber honoree and former associate dean for postdoctoral education and head of the Department of Periodontology at HSDM, he encouraged young academics to seek out those who could help them.
He also stressed that such collaboration is a two-way street. “You must be a good team player.” Noting the lessons learned from his own participation in the 1980 Swiss champion Bern handball team, he said, “You achieve much more with a good team.”
The program, now in its fifth year, recognizes and supports outstanding scholars primed to make important contributions in their fields. The 2024 cohort includes disciplines spanning the humanities, engineering, the sciences and the social sciences.
NUS and Imperial College London (Imperial) have announced a new partnership to strengthen research collaborations. The three-year partnership will see the two universities explore cooperation in early-stage research and ideas that might not otherwise be pursued.
The universities will explore potential research projects in areas such as health, sustainability, artificial intelligence and the digital economy.
NUS has long-standing links with Imperial, and the two universities’ new partnership will strengthen links between London and Singapore.
The new agreement will help fund exploratory research and see increased mobility of scientists and students between NUS and Imperial, with researchers spending time in each other’s laboratories in Singapore and London working on joint projects and sharing knowledge and data.
The agreement was signed at NUS during a visit by Imperial’s President Professor Hugh Brady and his delegation.
Professor Tan Eng Chye, President of NUS, said: “Imperial College London has been a valued partner of NUS and we are proud to deepen our collaboration through this new initiative. This latest partnership empowers academics, researchers and students from two leading global universities to drive influential research and build impactful networks. We share a common dedication to boosting exploratory research which is crucial for developing innovative solutions to the wicked problems of today. We look forward to the enriching exchange of knowledge and experience in the coming years.”
Professor Hugh Brady, President of Imperial College London, said: “This exciting partnership with the National University of Singapore demonstrates our shared commitment to tackling global challenges through world-leading research and innovation. By joining forces with one of Singapore’s top universities we are poised to make significant advancements in areas such as sustainability, healthcare innovation, and the digital economy. This collaboration will not only enhance our research capabilities but also provide invaluable opportunities for our staff and students to expand their international networks and experience.”
NUS and Imperial have worked together successfully on many previous projects, including engineering common baker’s yeast to produce a key ingredient for dementia medicines.
Multimaterial 3D printing enables makers to fabricate customized devices with multiple colors and varied textures. But the process can be time-consuming and wasteful because existing 3D printers must switch between multiple nozzles, often discarding one material before they can start depositing another.
Researchers from MIT and Delft University of Technology have now introduced a more efficient, less wasteful, and higher-precision technique that leverages heat-responsive materials to print objects that have multiple colors, shades, and textures in one step.
Their method, called speed-modulated ironing, utilizes a dual-nozzle 3D printer. The first nozzle deposits a heat-responsive filament and the second nozzle passes over the printed material to activate certain responses, such as changes in opacity or coarseness, using heat.
By controlling the speed of the second nozzle, the researchers can heat the material to specific temperatures, finely tuning the color, shade, and roughness of the heat-responsive filaments. Importantly, this method does not require any hardware modifications.
The researchers developed a model that predicts the amount of heat the “ironing” nozzle will transfer to the material based on its speed. They used this model as the foundation for a user interface that automatically generates printing instructions which achieve color, shade, and texture specifications.
One could use speed-modulated ironing to create artistic effects by varying the color on a printed object. The technique could also produce textured handles that would be easier to grasp for individuals with weakness in their hands.
“Today, we have desktop printers that use a smart combination of a few inks to generate a range of shades and textures. We want to be able to do the same thing with a 3D printer — use a limited set of materials to create a much more diverse set of characteristics for 3D-printed objects,” says Mustafa Doğa Doğan PhD ’24, co-author of a paper on speed-modulated ironing.
This project is a collaboration between the research groups of Zjenja Doubrovski, assistant professor at TU Delft, and Stefanie Mueller, the TIBCO Career Development Professor in the Department of Electrical Engineering and Computer Science (EECS) at MIT and a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Doğan worked closely with lead author Mehmet Ozdemir of TU Delft; Marwa AlAlawi, a mechanical engineering graduate student at MIT; and Jose Martinez Castro of TU Delft. The research will be presented at the ACM Symposium on User Interface Software and Technology.
Modulating speed to control temperature
The researchers launched the project to explore better ways to achieve multiproperty 3D printing with a single material. The use of heat-responsive filaments was promising, but most existing methods use a single nozzle for both printing and heating, so the printer must first heat the nozzle to the desired target temperature before depositing the material.
However, heating and cooling the nozzle takes a long time, and there is a danger that the filament in the nozzle might degrade as it reaches higher temperatures.
To prevent these problems, the team developed an ironing technique where material is printed using one nozzle, then activated by a second, empty nozzle which only reheats it. Instead of adjusting the temperature to trigger the material response, the researchers keep the temperature of the second nozzle constant and vary the speed at which it moves over the printed material, slightly touching the top of the layer.
“As we modulate the speed, that allows the printed layer we are ironing to reach different temperatures. It is similar to what happens if you move your finger over a flame. If you move it quickly, you might not be burned, but if you drag it across the flame slowly, your finger will reach a higher temperature,” AlAlawi says.
The MIT team collaborated with the TU Delft researchers to develop the theoretical model that predicts how fast the second nozzle must move to heat the material to a specific temperature.
The model correlates a material’s output temperature with its heat-responsive properties to determine the exact nozzle speed which will achieve certain colors, shades, or textures in the printed object.
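The team's actual thermal model isn't given in this article, but the speed-temperature relationship it captures can be illustrated with a toy first-order heating model: the printed layer approaches the nozzle temperature exponentially over the dwell time, which shrinks as the nozzle moves faster. All constants below are made-up placeholders, not calibrated values; inverting the model picks a speed for a target temperature, which is the kind of mapping the researchers' interface automates.

```python
import math

# Placeholder constants (illustrative, not calibrated):
T_NOZZLE = 240.0   # ironing-nozzle temperature, deg C (held constant)
T_AMBIENT = 25.0   # printed-layer starting temperature, deg C
TAU = 0.05         # heat-transfer time constant, seconds
CONTACT = 0.8      # effective contact length of the nozzle, mm

def layer_temperature(speed_mm_s: float) -> float:
    """Peak layer temperature for a given ironing speed: a slower pass
    means a longer dwell and a closer approach to the nozzle temperature."""
    dwell = CONTACT / speed_mm_s
    return T_AMBIENT + (T_NOZZLE - T_AMBIENT) * (1 - math.exp(-dwell / TAU))

def speed_for_temperature(t_target: float) -> float:
    """Invert the model: choose the ironing speed that reaches t_target."""
    frac = (t_target - T_AMBIENT) / (T_NOZZLE - T_AMBIENT)
    dwell = -TAU * math.log(1 - frac)
    return CONTACT / dwell

# Hotter targets (e.g., more foaming or charring) require slower passes.
for t_target in (120, 160, 200):
    print(f"{t_target} degC -> {speed_for_temperature(t_target):.1f} mm/s")
```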
“There are a lot of inputs that can affect the results we get. We are modeling something that is very complicated, but we also want to make sure the results are fine-grained,” AlAlawi says.
The team dug into scientific literature to determine proper heat transfer coefficients for a set of unique materials, which they built into their model. They also had to contend with an array of unpredictable variables, such as heat that may be dissipated by fans and the air temperature in the room where the object is being printed.
They incorporated the model into a user-friendly interface that simplifies the scientific process, automatically translating the pixels in a maker’s 3D model into a set of machine instructions that control the speed at which the object is printed and ironed by the dual nozzles.
Faster, finer fabrication
They tested their approach with three heat-responsive filaments. The first, a foaming polymer with particles that expand as they are heated, yields different shades, translucencies, and textures. They also experimented with a filament filled with wood fibers and one with cork fibers, both of which can be charred to produce increasingly darker shades.
The researchers demonstrated how their method could produce objects like water bottles that are partially translucent. To make the water bottles, they ironed the foaming polymer at low speeds to create opaque regions and higher speeds to create translucent ones. They also utilized the foaming polymer to fabricate a bike handle with varied roughness to improve a rider’s grip.
Trying to produce similar objects using traditional multimaterial 3D printing took far more time, sometimes adding hours to the printing process, and consumed more energy and material. In addition, speed-modulated ironing could produce fine-grained shade and texture gradients that other methods could not achieve.
In the future, the researchers want to experiment with other thermally responsive materials, such as plastics. They also hope to explore the use of speed-modulated ironing to modify the mechanical and acoustic properties of certain materials.
Speed-modulated ironing enables makers to fabricate objects with varied colors and textures, like the owls pictured here, using only one material with high precision. The technique is faster and produces less waste than other methods.
Seven early-stage startups will receive a combined $11.7 million in investment, an outcome of the University of Melbourne’s strategy to drive research translation by supporting the growth of more early-stage ventures affiliated with the University and its medical research institute partners.
A reservoir of virus in the body may explain why some people experience long COVID symptoms
BWH Communications
Researchers found people with wide-ranging long COVID symptoms were twice as likely to have SARS-CoV-2 proteins in their blood, compared to those without long COVID symptoms, according to a study out of Harvard-affiliated Brigham and Women’s Hospital.
Commonly reported long COVID symptoms included fatigue, brain fog, muscle pain, joint pain, back pain, headache, sleep disturbance, loss of smell or taste, and gastrointestinal symptoms.
Results are published in Clinical Microbiology and Infection.
Specifically, the team found that 43 percent of those with long COVID symptoms affecting three major systems in the body, including cardiopulmonary, musculoskeletal, and neurologic systems, tested positive for viral proteins within 1 to 14 months of their positive COVID test. But only 21 percent of those who didn’t report any long COVID symptoms tested positive for the SARS-CoV-2 biomarkers in this same period.
“If we can identify a subset of people who have persistent viral symptoms because of a reservoir of virus in the body, we may be able to treat them with antivirals to alleviate their symptoms,” said lead author Zoe Swank, a postdoctoral research fellow in the Department of Pathology at BWH.
The study analyzed 1,569 blood samples collected from 706 people, including 392 participants from the National Institutes of Health-supported Researching COVID to Enhance Recovery (RECOVER) Initiative, who had previously tested positive for a COVID infection. Using Simoa, an ultrasensitive test for detecting single molecules, researchers looked for whole and partial proteins from the SARS-CoV-2 virus. They also analyzed data from the participants’ long COVID symptoms, using electronic medical chart information or surveys that were gathered at the same time as the blood samples were taken.
It’s possible that a persistent infection explains some — but not all — of the long COVID sufferers’ symptoms. If this is the case, testing and treatment could aid in identifying patients who may benefit from treatments such as antiviral medications.
A condition with more than one cause
One of the questions raised by the study is why more than half of patients with wide-ranging long COVID symptoms tested negative for persistent viral proteins.
“This finding suggests there is likely more than one cause of long COVID,” said David Walt, a professor of pathology at BWH and principal investigator on the study. “For example, another possible cause of long-COVID symptoms could be that the virus harms the immune system, causing immune dysfunction to continue after the virus is cleared.”
To better understand whether an ongoing infection is behind some people’s long COVID symptoms, Swank, Walt, and other researchers are currently conducting follow-up studies. They’re analyzing blood samples and symptom data in larger groups of patients, including people across a wide range of ages and those with compromised immune systems. This way, they can also see if some people are more likely to have persistent virus in the body.
“There is still a lot that we don’t know about how this virus affects people,” said David C. Goff, a senior scientific program director for the RECOVER Observational Consortium Steering Committee and director of the Division of Cardiovascular Sciences at the National Heart, Lung, and Blood Institute (NHLBI), part of NIH. “These types of studies are critical to help investigators better understand the mechanisms underlying long COVID — which will help bring us closer to identifying the right targets for treatment.”
Goff added that these results also support ongoing efforts to study antiviral treatments.
The SARS-CoV-2 blood test developed by Brigham and Women’s researchers is also currently being used in a national study, called RECOVER-VITAL, that is testing whether an antiviral drug helps patients recover from long COVID. The RECOVER-VITAL trial will test the patients’ blood before and after treatment with an antiviral to see if treatment eliminates persistent viral proteins in the blood.
The idea that a virus can stay in the body and cause ongoing symptoms months after an infection isn’t unique to COVID.
“Other viruses are associated with similar post-acute syndromes,” said Swank. She noted animal studies have found Ebola and Zika proteins in tissues post-infection, and these viruses have also been associated with post-infection illness.
Funding for this work came from the National Institutes of Health (NIH) and Barbara and Amos Hostetter.
Researchers uncovered how twisting layers of a material can generate a mysterious electron-path-deflecting effect, unlocking new possibilities for controlling light and electrons in quantum materials.
Growing up in Taiwan, Jane-Jane Chen excelled at math and science, which, at that time, were promoted heavily by the government, and were taught at a high level. Learning rudimentary English as well, the budding scientist knew she wanted to come to the United States to continue her studies, after she earned a bachelor of science in agricultural chemistry from the National Taiwan University in Taipei.
But the journey to becoming a respected scientist, with many years of notable National Institutes of Health (NIH) and National Science Foundation-funded research findings, would require Chen to be uncommonly determined, to move far from her childhood home, to overcome cultural obstacles — and to have the energy to be a trailblazer — in a field where barriers to being a woman in science were significantly higher than they are today.
Today, Chen is looking back on her journey, and on her long career as a principal research scientist at the MIT Institute for Medical Engineering and Science (IMES), a position from which she recently retired after 45 dedicated years.
At MIT, Chen established herself as an internationally recognized authority in the field of blood cell development — specifically red blood cells, says Lee Gehrke, the Hermann L.F. Helmholtz Professor and core faculty in IMES, professor of microbiology and immunobiology and health science and technology at Harvard Medical School, and one of the scientists Chen worked with most closely.
“Red cells are essential because they carry oxygen to our cells and tissues, requiring iron in the form of a co-factor called heme,” Gehrke says. “Both insufficient heme availability and excess heme are detrimental to red cell development, and Dr. Chen explored the molecular mechanisms allowing cells to adapt to variable heme levels to maintain blood cell production.”
During her MIT career, Chen produced potent biochemistry research on the heme-regulated eIF2 alpha kinase (first discovered as the heme-regulated inhibitor of translation, HRI) and the regulation of gene expression at the level of translation as it relates to anemia, including:
cloning the HRI cDNA, which enabled groundbreaking new discoveries about HRI in the erythroid system and, most recently, in brain neurons under mitochondrial stress and in cancers;
elucidating the biochemistry of heme-regulation of HRI;
generating universal HRI knockout mice as a valuable research tool to study HRI’s functions in vivo in the setting of the whole animal; and
establishing HRI as a master translation regulator for erythropoiesis under stress and diseases.
“Dr. Chen’s signature discovery is the molecular cloning of the cDNA of the heme regulated inhibitor protein (HRI), a master regulatory protein in gene expression under stress and disease conditions,” Gehrke says, adding that Chen “subsequently devoted her career to defining a molecular and biochemical understanding of this key protein kinase” and that she “has also contributed several invited review articles on the subject of red cell development, and her papers are seminal contributions to her field.”
Forging her path
Shortly after graduating college, in 1973, Chen received a scholarship to come to California to study for her PhD in biochemistry at the School of Medicine of the University of Southern California. In Taiwan, Chen recalls, the demographic balance between male and female students was even, about 50 percent for each. Once she was in medical school in the United States, she found there were fewer female students, closer to 30 percent at that time, she recalls.
But she says she was fortunate to have important female mentors while at USC, including her PhD advisor, Mary Ellen Jones, a renowned biochemist who is notable for her discovery of carbamyl phosphate, a chemical substance that is key to the biosynthesis of both pyrimidine nucleotides, and arginine and urea. Jones, whom The New York Times called a “crucial researcher on DNA” and a foundational basic cancer researcher, had worked with eventual Nobel laureate Fritz Lipmann at Massachusetts General Hospital.
When Chen arrived, while there were other Taiwanese students at USC, there were not many at the medical school. Chen says she bonded with a young female scientist and student from Hong Kong and with another female student who was Korean and Chinese, but who was born in America. Forming these friendships was crucial for blunting the isolation she could sometimes feel as a newcomer to America, particularly her connection with the American-born young woman: “She helped me a lot with getting used to the language,” and the culture, Chen says. “It was very hard to be so far away from my family and friends,” she adds. “It was the very first time I had left home. By coincidence, I had a very nice roommate who was not Chinese, but knew the Chinese language conversationally, so that was so lucky … I still have the letters that my parents wrote to me. I was the only girl, and the eldest child (Chen has three younger brothers), so it was hard for all of us.”
“Mostly, the culture I learned was in the lab,” Chen remembers. “I had to work a long day in the lab, and I knew it was such a great opportunity — to go to seminars with professors to listen to speakers who had won, or would win, Nobel Prizes. My monthly living stipend was $300, so that had to stretch far. In my second year, more of my college friends had come to the USC and Caltech, and I began to have more interactions with other Taiwanese students who were studying here.”
Chen's first scientific discovery at Jones’ laboratory was that the fourth enzyme of the pyrimidine biosynthesis, dihydroorotate dehydrogenase, is localized in the inner membrane of the mitochondria. As it more recently turned out, this enzyme plays dual roles not only for pyrimidine biosynthesis, but also for cellular redox homeostasis, and has been demonstrated to be an important target for the development of cancer treatments.
Coming to MIT
After receiving her degree, Chen received a postdoctoral fellowship to work at the Roche Institute of Molecular Biology, in New Jersey, for nine months. In 1979, she married Zong-Long Liau, who was then working at MIT Lincoln Laboratory, from where he also recently retired. She accepted a postdoctoral position to continue her scientific training and pursuit at the laboratory of Irving M. London at MIT, and Jane-Jane and Zong-Long have lived in the Boston area ever since, raising two sons.
Looking back at her career, Chen says she is most proud of “being an established woman scientist with decades of NIH findings, and for being a mother of two wonderful sons.” During her time at MIT and IMES, she has worked with many renowned scientists, including Gehrke and London, professor of biology at MIT, professor of medicine at Harvard Medical School (HMS), founding director of the Harvard-MIT Program in Health Sciences and Technology (HST), and a recognized expert in molecular regulation of hemoglobin synthesis. She says that she is also in debt to the colleagues and collaborators at HMS and Children’s Hospital Boston for their scientific interests and support at the time when her research branched into the field of hematology, far different from her expertise in biochemistry. All of them are HST-educated physician scientists, including Stuart H. Orkin, Nancy C. Andrews, Mark D. Fleming, and Vijay G. Sankaran.
“We will miss Dr. Chen’s sage counsel on all matters scientific and communal,” says Elazer R. Edelman, the Edward J. Poitras Professor in Medical Engineering and Science, and the director of the Center for Clinical and Translational Research (CCTR), who was the director of IMES when Chen retired in June. “For generations, she has been an inspiration and guide to generations of students and established leaders across multiple communities — a model for all.”
She says her life in retirement “is a work in progress” — but she is working on a scientific review article, so that she can have “my last words on the research topics of my lab for the past 40 years.” Chen is pondering writing a memoir “reflecting on the journey of my life thus far, from Taiwan to MIT.” She also plans to travel to Taiwan more frequently, to better nurture and treasure the relationships with her three younger brothers, one of whom lives in Los Angeles.
She says that in looking back, she is grateful to have been part of a special grant application, awarded by the National Science Foundation, aimed at helping women scientists get their careers back on track after having a family. And she remembers the advice of a female scientist in Jones’ lab during her last year of graduate study, who had stepped back from her research for a while after having two children: “She was not happy that she had done that, and she told me: Never drop out. Try to always keep your hands in the research, and the work. So that is what I did.”
“It’s surprisingly hard to find a coherent vision of what a truly just society, grounded in [classical] liberal principles, would actually look like,” said Chandler, an economist and philosopher at the London School of Economics, during a talk Monday with Eric Beerbohm, director of the Safra Center for Ethics at Harvard.
The pair discussed Chandler’s new book, “Free and Equal: A Manifesto for a Just Society,” in which he defends the ideas of the late John Rawls, the renowned 20th century Harvard political philosopher, and attempts to apply them to the economic and political issues of today.
The book offers a full-throated defense of liberal egalitarianism, as Rawls outlined it, and then tries to “bridge the gap between Rawls’ quite abstract, high-level principles and a whole range of practical policy questions,” Chandler said.
Rawls was a deeply influential political, moral and legal philosopher who taught at Harvard from 1962 until his death in 2002 and is best known for his 1971 opus, “A Theory of Justice.”
His theory of “justice as fairness” envisions a society in which every person has an equal right to the basic liberties and opportunities, and where inequalities exist, those with the least power or advantage should be prioritized. And he begins with a thought experiment: Would we design a more just society if nobody knew in advance whether they’d be among the most powerful or the most vulnerable?
Chandler was first inspired to explore this question while he was at Harvard in 2008 on a one-year Henry Fellowship. He wondered why economics and political theory appeared to have drifted apart for progressives since Rawls’ heyday and whether he could find a way to reunite them.
One objective of the book, he explained, was to offer an accessible summary of Rawls’ ideas to a non-academic public and to address some of the common criticisms and misunderstandings about him.
“I think one of the reasons Rawls hasn’t had as much influence as he might have had on public policy is that he said so little about the practical implications of his ideas,” he said.
“I particularly wanted to bring out some of the communitarian aspects of Rawls’ thinking, and to emphasize how his account of economic justice is much richer and also much more radical than his commons.”
“I particularly wanted to bring out some of the communitarian aspects of Rawls’ thinking, and to emphasize how his account of economic justice is much richer and also much more radical than is commonly recognized,” he said, referring in part to Rawls’ views on the sharing of societal resources.
In Rawls’ view, economic justice is about more than how wealth is distributed; it also concerns the balance of power between workers and business owners and the importance of having a sense of financial independence and opportunity.
A secondary goal was to respond to critics of classical liberalism, which embraces individual freedom as a primary value, and to try to “rehabilitate” it as a progressive public philosophy.
“I think in popular discourse, particularly on the left, liberalism has come to be associated with the neoliberal ideas of thinkers like [Friedrich] Hayek and [Milton] Friedman,” he said, with their faith in markets and their singular focus on economic growth.
That kind of thinking has come to dominate political discourse and economic policy since the 1980s, Chandler notes in his book. It has left progressives without a solid philosophical mooring for their thinking and policies.
Whereas President Ronald Reagan and Prime Minister Margaret Thatcher could look to Hayek and Friedman in the 1980s for a source of intellectual coherence and direction, it’s not as obvious today where progressives look for similar inspiration, he added.
“The long-term failure of mainstream progressive parties, like the Democrats here [in the U.S.] and the Labour Party in the U.K., to develop a coherent political vision of their own” is not merely an intellectual problem; it deeply undermines the parties’ chances for ongoing electoral success, Chandler said.
Rawls, Chandler believes, offers a useful framework for weaving an array of different policies together in a coherent way and provides an intellectual and ethical clarity about why we should pursue progressive policy ideas like workplace democracy or universal basic income or various forms of participatory democratic politics.
Not everyone will agree with Chandler’s point of view, but for politicians and others looking for justifications of certain policies, that clarity could have significant practical value. “Being able to explain why we support different policies is important,” he said, “and Rawls can help us do that.”
U.S. seems impossibly riven. What if we could start from scratch?
Christina Pazzanese
Harvard Staff Writer
Key would be focusing on social, political, economic fairness, according to new book on ideas of political philosopher John Rawls
A new book by Daniel Chandler, “Free and Equal: A Manifesto for a Just Society,” offers a vision for democratic change inspired by the work of John Rawls, the towering political philosopher who joined the Harvard faculty in 1962 and maintained ties to the University until his death in 2002. Rawls’ 1971 magnum opus, “A Theory of Justice,” has influenced generations of philosophers and legal scholars.
In a conversation that has been edited for clarity and length, Chandler, an economist and philosopher at the London School of Economics who studied at Harvard under Nobel Prize-winning economist Amartya Sen, explained why Rawls’ ideas speak to the present day.
Which of Rawls’ principles offer the best framework for envisioning change?
The fundamental idea of Rawls’ philosophy is that society should be fair, and he developed a famous thought experiment called the “original position” for thinking about what that might actually mean. If we want to know what a fair society would look like, we should imagine how we would choose to organize it if we didn’t know what our position in that society would be, whether we would be rich or poor, gay or straight, Black or white, from behind what he called a “veil of ignorance.” It’s a very intuitive way to think about fairness, similar to the idea that someone might cut a cake more fairly if they didn’t know which piece they were going to end up getting.
He uses this thought experiment to identify two fundamental principles, to do with freedom and equality respectively — hence the title of my book — that we can then use to think about how to design the basic institutions of a democratic society: what the Constitution should look like, how to organize the political process, the broad outlines of our economic system, including the role of markets, the nature of property rights, the scope of government intervention, and so on.
Rawls’ first principle is what he called the “basic liberties” principle. That’s the idea that everyone is entitled to a set of truly fundamental freedoms, including not just personal freedoms, such as freedom of speech, religion, and sexuality, but also political freedoms — not just the right to vote, but all of the freedoms that we need to play a part as genuine equals in the political process.
His second principle has two parts. The first is what he called “fair equality of opportunity.” That’s not just the absence of discrimination but the idea that everyone should have a genuinely equal chance to develop and apply their talents and abilities in life. Equality of opportunity is sometimes seen as the less radical partner to equality of outcome, but it’s really a very demanding ideal, one that countries like America and the U.K. fall far short of today.
The second is the “difference principle” — the idea that we should organize our economy so that the least well-off are better off than they would be under any alternative economic system. So, some inequality can be justified because it’s necessary for markets to function well, and higher pay gives people incentives to work hard and innovate, but we need to make sure the benefits are widely shared, and that isn’t something we can just leave to markets.
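To make the maximin logic of the difference principle concrete, here is a toy sketch in Python. The distributions and labels are invented for illustration; they come from neither Rawls nor Chandler.

```python
# Illustrative toy: the difference principle as a maximin rule over
# stylized income distributions (all numbers are hypothetical).
economies = {
    "laissez_faire": [10, 40, 200],
    "regulated_market": [25, 45, 120],
    "strict_equality": [20, 20, 20],
}

# Pick the economy whose worst-off member fares best.
best = max(economies, key=lambda name: min(economies[name]))
print(best)  # "regulated_market": its least well-off (25) beats 10 and 20
```

Note how the toy echoes the point above: the unequal mixed economy is chosen over strict equality because its least well-off member is better off, so some inequality passes the test.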
One of the things that I think is most interesting and important about Rawls’ economic thinking, but has often been overlooked, is that when he’s talking about inequality and economic justice, he’s not only talking about the distribution of financial resources. He’s concerned with how our society distributes power and control, like the balance of power between owners and workers, and also what he calls the “social bases of self-respect,” which include having a sense of independence, of being able to stand on your own two feet, social recognition from our peers, and opportunities for meaningful work.
Historically, one critique of Rawls is that his ideas, while important, are not very pragmatic and don’t offer a roadmap for change. How might the kind of change you propose come to fruition?
I think that’s a fair criticism. You know, although Rawls is really the unrivaled giant of 20th-century political philosophy, his ideas haven’t had much impact on popular debate or public policy, at least not compared to Milton Friedman and F.A. Hayek. I think it’s this lack of practical application that helps explain this gap. I’ve tried to pick up where Rawls left off and flesh out how we could put his ideas into practice.
In terms of how change can happen, there aren’t easy answers. But we can be pretty certain that change won’t happen unless political parties and the people who shape our public debate are able to articulate a positive and unifying vision of where they want society to go. The starting point for making the quite deep changes that America needs, to both its political and economic institutions, is to be able to articulate that positive vision.
Some commentators have suggested a binary choice — liberal principles on one side, authoritarian ideas on the other — in the coming presidential election. You say we should reject that idea.
It’s not that I don’t think that choice exists; it’s that I think it’s possible and necessary to try to appeal to people across the spectrum. What I’m rejecting is the idea that society is divided into two fixed camps who can’t speak to one another anymore. It’s still the case that large majorities support liberal freedoms, the existence of a democratic political system, and an economy that is broadly market-based but genuinely works for everybody. The divide is tied much more to party identification than to issues. So, despite how divided things seem right now, I think it’s possible to build a broad-based coalition around these kinds of ideas, and I hope my book provides people with the ideas and arguments to try to do that.
One of MIT’s missions is helping to solve the world’s greatest problems — with a large focus on one of the most pressing topics facing the world today: climate change. The MIT Energy and Climate Club (MITEC), formerly known as the MIT Energy Club, has been working since 2004 to inform and educate the entire MIT community about this urgent issue and other related matters.
MITEC, one of the largest clubs on campus, has hundreds of active members from every major, including both undergraduate and graduate students. With a broad reach across the Institute, MITEC is the hub for thought leadership and relationship-building across campus.
The club’s co-presidents are Laurențiu Anton, a doctoral candidate in electrical engineering and computer science; Rosie Keller, an MBA student in the MIT Sloan School of Management; and Thomas Lee, a doctoral candidate in the Institute for Data, Systems, and Society. They say that faculty, staff, and alumni are also welcome to join and interact with the continuously growing club.
While they closely collaborate on all aspects of the club, each of the co-presidents has a focus area to support the student managing directors and vice presidents for several of the club’s committees. Keller oversees the External Relations, Social, Launchpad, and Energy and Climate Hackathon leadership teams. Lee supports the leadership team for next spring’s Energy Conference. He also assists the club treasurer on budget and finance and guides the industry Sponsorships team. Anton oversees marketing, community and education as well as the Energy and Climate Night and Energy and Climate Career Fair leadership teams.
“We think of MITEC as the umbrella of all things related to energy and climate on campus. Our goal is to share actionable information and not just have discussions. We work with other organizations on campus, including the MIT Environmental Solutions Initiative, to bring awareness,” says Anton. “Our Community and Education team is currently working with the MIT ESI [Environmental Solutions Initiative] to create an ecosystem map that we’re excited to produce for the MIT community.”
To share their knowledge and get more people interested in solving climate and energy problems, each year MITEC hosts a variety of events, including the MIT Energy and Climate Night, the MIT Energy and Climate Hack, the MIT Energy and Climate Career Fair, and the MIT Energy Conference, to be held next spring on March 3-4. The club also offers students the opportunity to gain valuable work experience while engaging with top companies, such as Constellation Energy and GE Vernova, on real climate and energy issues through its Launchpad Program.
Founded in 2006, the annual MIT Energy Conference is the largest student-run conference in North America focused on energy and climate issues, where hundreds of participants gather every year with the CEOs, policymakers, investors, and scholars at the forefront of the global energy transition.
“The 2025 MIT Energy Conference’s theme is ‘Breakthrough to Deployment: Driving Climate Innovation to Market’ — which focuses on the importance of both cutting-edge research innovation as well as large-scale commercial deployment to successfully reach climate goals,” says Lee.
Anton notes that the first of MITEC’s four flagship events is the MIT Energy and Climate Night, a research symposium that takes place every fall at the MIT Museum; this year it will be held on Nov. 8. The club invites a select group of keynote speakers and features several dozen student posters. Guests can walk around and engage with the students, who in turn get practice showcasing their research. The club’s career fair will take place in the spring semester, shortly after Independent Activities Period.
MITEC also provides members opportunities to meet with companies that are working to improve the energy sector, which helps to slow down, as well as adapt to, the effects of climate change.
“We recently went to Provincetown and toured Eversource’s battery energy storage facility. This helped open doors for club members,” says Keller. “The Provincetown battery helps address grid reliability problems after extreme storms on Cape Cod — which speaks to energy’s connection to both the mitigation and adaptation aspects of climate change,” adds Lee.
“MITEC is also a great way to meet other students at MIT that you might not otherwise have a chance to,” says Keller.
“We’d always welcome more undergraduate students to join MITEC. There are lots of leadership opportunities within the club for them to take advantage of and build their resumes. We also have good and growing collaboration between different centers on campus such as the Sloan Sustainability Initiative and the MIT Energy Initiative. They support us with resources, introductions, and help amplify what we're doing. But students are the drivers of the club and set the agendas,” says Lee.
All three co-presidents are excited that MIT President Sally Kornbluth wants to take climate change solutions to the next level, and that she recently launched The Climate Project at MIT, the Institute’s major new effort to accelerate and scale up those solutions.
“We look forward to connecting with the new directors of the Climate Project at MIT and Interim Vice President for Climate Change Richard Lester in the near future. We are eager to explore how MITEC can support and collaborate with the Climate Project at MIT,” says Anton.
Lee, Keller, and Anton want MITEC to continue fostering solutions to climate issues. They emphasized that while individual actions like bringing your own thermos, using public transportation, or recycling are necessary, there’s a bigger picture to consider. They encourage the MIT community to think critically about the infrastructure and extensive supply chains behind the products everyone uses daily.
“It’s not just about bringing a thermos; it’s also understanding the life cycle of that thermos, from production to disposal, and how our everyday choices are interconnected with global climate impacts,” says Anton.
“Everyone should get involved with this worldwide problem. We’d like to see more people think about how they can use their careers for change. To think how they can navigate the type of role they can play — whether it’s in finance or on the technical side. I think exploring what that looks like as a career is also a really interesting way of thinking about how to get involved with the problem,” says Keller.
“MITEC’s newsletter reaches more than 4,000 people. We’re grateful that so many people are interested in energy and climate change,” says Anton.
A growing portion of Americans who are struggling to pay for their household energy live in the South and Southwest, reflecting a climate-driven shift away from heating needs and toward air conditioning use, an MIT study finds.
The newly published research also reveals that a major U.S. federal program that provides energy subsidies to households, by assigning block grants to states, does not yet fully match these recent trends.
The work evaluates the “energy burden” on households, which reflects the percentage of income needed to pay for energy necessities, from 2015 to 2020. Households with an energy burden greater than 6 percent of income are considered to be in “energy poverty.” With climate change, rising temperatures are expected to add financial stress in the South, where air conditioning is increasingly needed. Meanwhile, milder winters are expected to reduce heating costs in some colder regions.
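To make the metric concrete, here is a minimal sketch of the burden calculation in Python; the household figures are hypothetical, not drawn from the study.

```python
def energy_burden(annual_energy_cost: float, annual_income: float) -> float:
    """Energy burden: the share of income spent on energy necessities."""
    return annual_energy_cost / annual_income

# Hypothetical household: $30,000 income, $2,400 in annual energy costs.
burden = energy_burden(2400.0, 30000.0)   # 0.08
energy_poor = burden > 0.06               # True: exceeds the 6 percent threshold
print(f"burden = {burden:.1%}, energy poor: {energy_poor}")
```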
“From 2015 to 2020, there is an increase in burden generally, and you do also see this southern shift,” says Christopher Knittel, an MIT energy economist and co-author of a new paper detailing the study’s results. About federal aid, he adds, “When you compare the distribution of the energy burden to where the money is going, it’s not aligned too well.”
The authors are Carlos Batlle, a professor at Comillas University in Spain and a senior lecturer with the MIT Energy Initiative; Peter Heller SM ’24, a recent graduate of the MIT Technology and Policy Program; Knittel, the George P. Shultz Professor at the MIT Sloan School of Management and associate dean for climate and sustainability at MIT; and Tim Schittekatte, a senior lecturer at MIT Sloan.
A scorching decade
The study, which grew out of graduate research that Heller conducted at MIT, deploys a machine-learning estimation technique that the scholars applied to U.S. energy use data.
Specifically, the researchers took a sample of about 20,000 households from the U.S. Energy Information Administration’s Residential Energy Consumption Survey, which includes a wide variety of demographic characteristics about residents, along with building-type and geographic information. Then, using the U.S. Census Bureau’s American Community Survey data for 2015 and 2020, the research team estimated the average household energy burden for every census tract in the lower 48 states — 73,057 in 2015, and 84,414 in 2020.
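The paper’s exact estimator isn’t described here, so the following is only a rough sketch of that kind of survey-to-tract pipeline, with a gradient-boosted regressor standing in for the authors’ machine-learning technique and with hypothetical file and column names.

```python
# Sketch: fit a model on household survey records (RECS), then predict the
# average burden for each census tract (ACS). All names are illustrative.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

FEATURES = ["income", "household_size", "rooms",
            "heating_degree_days", "cooling_degree_days"]

recs = pd.read_csv("recs_households.csv")         # ~20,000 surveyed households
model = GradientBoostingRegressor(random_state=0)
model.fit(recs[FEATURES], recs["energy_burden"])  # burden observed in survey

tracts = pd.read_csv("acs_tracts.csv")            # one row per census tract
tracts["est_burden"] = model.predict(tracts[FEATURES])
tracts["energy_poor"] = tracts["est_burden"] > 0.06   # 6 percent threshold
```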
That allowed the researchers to chart the changes in energy burden in recent years, including the shift toward a greater energy burden in southern states. In 2015, Maine, Mississippi, Arkansas, Vermont, and Alabama were the five states (ranked in descending order) with the highest energy burden across census tracts. In 2020, that had shifted somewhat, with Maine and Vermont dropping on the list and southern states increasingly having a larger energy burden. That year, the top five states in descending order were Mississippi, Arkansas, Alabama, West Virginia, and Maine.
The data also reflect an urban-rural shift. In 2015, 23 percent of the census tracts where the average household lives in energy poverty were urban. That figure shrank to 14 percent by 2020.
All told, the data are consistent with the picture of a warming world, in which milder winters in the North, Northwest, and Mountain West require less heating fuel, while more extreme summer temperatures in the South require more air conditioning.
“Who’s going to be harmed most from climate change?” asks Knittel. “In the U.S., not surprisingly, it’s going to be the southern part of the U.S. And our study is confirming that, but also suggesting it’s the southern part of the U.S. that’s least able to respond. If you’re already burdened, the burden’s growing.”
An evolution for LIHEAP?
In addition to identifying the shift in energy needs during the last decade, the study also illuminates a longer-term change in U.S. household energy needs, dating back to the 1980s. The researchers compared the present-day geography of U.S. energy burden to the help currently provided by the federal Low Income Home Energy Assistance Program (LIHEAP), which dates to 1981.
Federal aid for energy needs actually predates LIHEAP, but the current program was introduced in 1981, then updated in 1984 to include cooling needs such as air conditioning. When the formula was updated in 1984, two “hold harmless” clauses were also adopted, guaranteeing states a minimum amount of funding.
Still, LIHEAP’s parameters also predate the rise of temperatures over the last 40 years, and the current study shows that, compared to the current landscape of energy poverty, LIHEAP distributes relatively less of its funding to southern and southwestern states.
“The way Congress uses formulas set in the 1980s keeps funding distributions nearly the same as they were in the 1980s,” Heller observes. “Our paper illustrates the shift in need that has occurred over the decades since then.”
Currently, it would take a fourfold increase in LIHEAP to ensure that no U.S. household experiences energy poverty. But the researchers tested out a new funding design, which would help the worst-off households first, nationally, ensuring that no household would have an energy burden of greater than 20.3 percent.
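One way to read that design: with a fixed national budget, money goes to the worst-off households first, which amounts to finding the lowest burden cap the budget can afford. Below is a minimal sketch of that idea, with illustrative numbers rather than the study’s data or code.

```python
def subsidy_needed(incomes, costs, cap):
    """Total subsidy required so that no household's burden exceeds `cap`."""
    return sum(max(0.0, cost - cap * income)
               for income, cost in zip(incomes, costs))

def lowest_affordable_cap(incomes, costs, budget, lo=0.0, hi=1.0, iters=60):
    """Bisect for the smallest burden cap fundable under `budget`."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if subsidy_needed(incomes, costs, mid) > budget:
            lo = mid    # cap too ambitious for this budget; relax it
        else:
            hi = mid    # affordable; try a tighter cap
    return hi

incomes = [20_000, 35_000, 60_000]   # hypothetical household incomes
costs = [4_000, 2_500, 2_000]        # hypothetical annual energy costs
print(lowest_affordable_cap(incomes, costs, budget=1_500))  # ~0.125
```

In the study’s framing, current funding supports a cap of 20.3 percent under this design, and roughly four times as much would be needed to push the cap down to the 6 percent energy-poverty line.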
“We think that’s probably the most equitable way to allocate the money, and by doing that, you now have a different amount of money that should go to each state, so that no one state is worse off than the others,” Knittel says.
And while the new distribution concept would require a certain amount of subsidy reallocation among states, it would be with the goal of helping all households avoid a certain level of energy poverty, across the country, at a time of changing climate, warming weather, and shifting energy needs in the U.S.
“We can optimize where we spend the money, and that optimization approach is an important thing to think about,” Knittel says.
This map estimates the average energy burden for U.S. households between 2015 and 2020. Households whose energy costs exceed 6 percent of income are classified as energy-poor. Darker shades indicate higher energy burdens, and gray areas indicate census tracts where estimates are unavailable.
‘Harvard Thinking’: What skeptics get wrong about liberal arts
In podcast episode, an economist, an educator, and a philosopher make the case that it’s as essential as ever in today’s job market
Samantha Laine Perfas
Harvard Staff Writer
What is the point of college? It’s a question that many families and potential students think about when it comes to deciding the role of higher education in the future. This question is particularly pressing for those considering a liberal arts education.
“Why should some young person waking up to the world in late adolescence subject themselves to an education of four years or so that expects them to study a range of topics?” asked Susanna Siegel, the Edgar Pierce Professor of Philosophy, in this episode of “Harvard Thinking.” One reason is that it prepares future generations to think for themselves and contribute to a democratic society.
But it’s not just about critical thinking. David Deming, the Isabelle and Scott Black Professor of Political Economy who co-leads the College-to-Jobs Initiative at the Kennedy School, said it also sets students up for success in a constantly evolving workplace.
“Precisely because it is general, when it’s well-executed, [a liberal arts education] is teaching you not a set of specific competencies in some specific thing, but rather giving you a set of tools to teach you how to think about the next problem over the horizon,” Deming said.
Nancy Hill, the Charles Bigelow Professor of Education and a developmental psychologist in the Harvard Graduate School of Education, agrees that the liberal arts can enhance a student’s learning. However, she has also seen how such an education can be perceived as a luxury rather than the standard, and she argues that universities and other institutions should work to make it more accessible.
“There isn’t a sense of freedom to explore a liberal arts education when people are concerned about the economy,” Hill said. “For people who are first-gen and people from low-income backgrounds, I want them to come into any college … and feel confident in taking courses that broadly expose them to ideas.”
In this episode, host Samantha Laine Perfas talks with Siegel, Deming, and Hill about why a liberal arts education matters — and how to make it attractive again.
Transcript
Nancy Hill: I think we haven’t made the liberal arts education attractive, and innovated to make it attractive, to people who are doing college in ways that we sometimes forget are the majority of the ways in which people get their four-year degree. It’s over time, it’s at night, it’s part-time. It’s almost as if the liberal arts education they’re experiencing is a luxury good, when we should be seeing it as an essential aspect of education.
Samantha Laine Perfas: The cost of college tuition is on the rise. Even with ramped-up financial aid efforts from universities, parents and students are still trying to decide whether or not tuition will lead to a smart return on their investment. Jobs increasingly require specific training or skill sets, leading some to question the value of a liberal arts education. Including fields like history, literature, and philosophy, the liberal arts have experienced diminishing enrollment numbers and institutions are trying to figure out their place in universities’ overall ecosystems.
So how do these institutions make a liberal arts education attractive again?
Welcome to Harvard Thinking, a podcast where the life of the mind meets everyday life. Today I’m joined by:
David Deming: David Deming, I am the Isabelle and Scott Black Professor of Political Economy at Harvard.
Laine Perfas: He studies education, inequality, and the future of work, and co-leads the College-to-Jobs Initiative at the Kennedy School. Then:
Hill: Nancy Hill, and I’m the Charles Bigelow Professor of Education and a developmental psychologist in the Harvard Graduate School of Education.
Laine Perfas: Nancy’s research focuses on parenting and adolescent development. And finally:
Susanna Siegel: Susanna Siegel. I’m the Edgar Pierce Professor of Philosophy in the Philosophy Department at Harvard.
Laine Perfas: She’s written a lot about perception, drawing on both philosophy and the sciences of the mind.
And I’m Samantha Laine Perfas, your host and a writer for the Harvard Gazette. Today, we’ll be taking a critical look at liberal arts education.
I think it’s important to start with, one, what is a liberal arts education and, two, how is it different than other types of higher education?
Siegel: Sometimes when people ask about what a liberal arts education is, and why it’s valuable, you can hear it as a kind of veiled or indirect way of asking what the values of the humanities are, because nobody asks, like, oh gosh, what’s the point of studying STEM fields? But I think there’s another way of hearing the question, which is not so much focused on the humanities, but a different question of like, why should some young person waking up to the world in late adolescence subject themselves to an education of four years or so that expects them to study a range of topics, to expose themselves to a range of different subjects and modes of thought, even while ultimately concentrating on one or two more than others? What’s the value of that plurality? My answer is I take very seriously the relationship between liberalism and liberal arts. I think that W.E.B. Du Bois said in his debate with Booker T. Washington, you know, education is a training for democratic citizenship and one thing we would lose if we lost liberal arts education is we’d become far more susceptible to politics of domination, to systems of governance that really rely on a very tight control over the horizon of ideas.
Hill: I would add to that, because I think when we think about a liberal arts education and when people get into the discourse of the value of a liberal arts education, people think about the price tag, and I think if we lost the liberal arts education, I agree with Susanna, we would lose some of the richness of what it means to live in a civil society, to think about culture, to think about innovation. If we think about colleges as training for vocation, and many people do want a return on investment of their college tuition, particularly as we’re seeing college tuitions raised at such an astronomical rate, people do want to know, what am I getting for it? And I think part of what we are losing as we think about the liberal arts education is really helping people understand its true value; that it might not be able to translate immediately to a job the way some undergraduate majors do, but we haven’t done a good job of saying, what value does it add to the kinds of careers that you might want? We’ve long since passed the stage where people start a job and stay in that job and work their way up and retire with the gold watch. We have to really think about preparing young people and preparing a workforce to really navigate a very creative and innovative career where they’re needed to reinvent themselves in relation to society, their interests, and their goals. And I think a liberal arts education provides that kind of intellectual flexibility and cognitive flexibility that society really needs.
Deming: Since I’m an economist, maybe I should craft an economic argument for what a liberal arts education does for people and how it differs from, let’s say more vocationally oriented training. So like why, if you want to be in finance, why not just major in finance and take a bunch of courses in finance? And the reason, which is something related to what Nancy said, is that people typically go to school at the very beginning of their lives and then whatever they learn has to last them for the next 40 to 50, God willing, years of life. And so the question is, what kind of education will prepare you not just for your first job, but for the rest of your life? The argument for a liberal arts education is the argument that precisely because it is general, when it’s well-executed, it is teaching you not a set of specific competencies in some specific thing, but rather giving you a set of tools to teach you how to think about the next problem over the horizon that we don’t have an answer to now because it hasn’t come around yet. And so that involves the ability to think critically. If someone’s telling me something, how likely is it to be true? What is their motivation? How much should I trust this source of information? How much should I think about somebody’s incentives to tell me the truth or not tell me the truth and how should I weigh evidence when perspectives are competing? Those are all things that are not really vocations. They’re really teaching you how to weigh evidence and how to think critically about other people’s perspectives and to adopt other people’s perspectives. And that’s a kind of teaching you how to think, teaching you how to learn tool kit that I think when done well, liberal arts education can do better than much more specific training for a specific job.
Hill: Yeah, I want to follow up on that because I think in some ways it’s really how we’re talking about what a liberal arts education is in the public discourse. And so I think it’s easier to sell and easier to help people understand the value of a liberal arts education when they’re attending a four-year institution in their early 20s, in the very typical graduate from high school, go to college, graduate in four or six years, as they say. But the vast majority of people who attend college don’t go to college that way. They might go to college part-time, and when you’re paying a single tuition and you can take as many courses as you want, then I think it’s easier to see the value of taking courses that are broad and around one’s area of interest. But if you have to pay for the courses one at a time, a credit at a time, and you’re doing this part-time and you’re tying it very specifically to your career and upward mobility, I think we haven’t made the liberal arts education attractive, and innovated to make it attractive, to people who are doing college in ways that we sometimes forget are the majority of the ways in which people get their four-year degree. It’s over time, it’s at night, it’s part-time. It’s almost as if the liberal arts education they’re experiencing is a luxury good, when we should be seeing it as an essential aspect of education.
Laine Perfas: I hear all of you saying that the liberal arts focuses on critical thinking and analysis, which is really valuable, but what I’m wondering is: Is that enough in the current economy? Can it get people who are taking those classes into the jobs that then allow them to actually pay back the high cost of the education in the first place?
Deming: There is no study, at least that I’m aware of, that shows that being a philosophy major makes you a better CEO or something like that. However, there is some evidence that the earnings gap between people who major in liberal arts and people who major in applied fields like computer science and engineering is much larger right after college than it is in adulthood. It’s because people who major in liberal arts tend to catch up. So it suggests that the penalty you think you’re paying for majoring in liberal arts is much less over your lifespan than it appears when you first graduate.
I think it’s important when we think about the landscape of higher education currently to hold two ideas that seem contradictory, but are actually not, in our minds at the same time. So one is that colleges can be doing much better than they currently are. We could be delivering much more value for money and we haven’t adapted to meet the needs of the workforce today. And then at the same time, another thing that’s true is that by and large, most people who go to college get a good return on their investment. That’s accounting for their tuition costs and the opportunity costs of their time; that despite all of its warts it still ends up being a good bet for most people. Now it’s risky and it takes some time to pay off and people get upset because they’re paying quite a lot and the benefits take a long time to be realized and they don’t always happen for everybody, but if you just look at, you know, putting your money in the stock market rather than paying for tuition or investing in other social interventions like public health campaigns or universal basic income or pick your policy, the return on investment in an economic sense for, hey, instead I’m going to park my money into four years of tuition, it tends to be a good investment just in dollars-and-cents terms. So I don’t think you have to choose between a liberal arts education and a vocational trade. I think there’s lots of ways that we can deliver the breadth and the depth of a liberal arts and sciences education while also doing it in a context that gives people the skills to work in a team, to understand each other, to think critically. I just think there’s so much opportunity that we’re really not picking up.
Hill: David, can I ask you a question about that? It used to be that many companies would hire graduates with any major. In fact, they wanted majors that were adjacent to their field. And then they figured we can train them up on what we need them to do. But it seems now that businesses aren’t really valuing that kind of broad exposure in the ways that they used to. What are you seeing?
Deming: I think they still are hiring those people. Maybe they’re not doing it as much and maybe they’re a bit less happy about it. So it’s always important to look at people’s behavior, not what they say. So if you read The New York Times or The Wall Street Journal, The Boston Globe, you’re going to hear a lot of think pieces about how college degrees aren’t that useful. But then if you look at the employment numbers, it turns out that like actually entry-level college graduates, they’re not doing as well as they were in the boom times of the ’90s, but there’s a lot more of them now than there used to be. So it’s not enough to just have a college degree. Now they want you to have an internship and they want you to have done something more relevant. So I think of it more as like a gradual ratcheting up of the standards for getting a quote-unquote good job, which I think makes the case more for the relevance of college.
Hill: But as we think about how higher education has changed, many more high school students are now going to college. We’re up to 60 percent of high school students going directly into some form of college, but still only 37 percent of the workforce has a four-year college degree. So we’re getting this kind of drop-off of people who aren’t finishing, and we have more than half of the workforce figuring out how to make a living wage without a college degree. And I want to make the case that this kind of broad investment in a liberal arts education is useful to everyone.
Deming: I agree with that. I know that we’re here talking about liberal arts education in particular, and we tend to argue a lot about how education should look, but I think the first thing is that people need to be more educated. Like, there needs to be more of all of it. And that’s not because I’m in the education field. This is not a self-interested commentary. I’m saying this because education is one of those things that as society becomes richer and more prosperous, and as technology changes our possibilities, we want more of it, not less of it, because the level of sophistication you need to be a productive employee is just increasing over time. And a big worry I have is that all the negative vibes around higher education are obscuring this reality and that we’re just not doing enough to educate the next generation of young people for the demands of the economy.
Siegel: Can I introduce a slightly different perspective on some of these questions?
Laine Perfas: Yeah, go for it, Susanna.
Siegel: There’s a way of talking about these questions as we’ve been doing, which is a very natural way of thinking about it in terms of the individual consumer choice. But we could also think about it in terms of the social incentives to offer liberal arts education, and I think some of those incentives might not be visible from the point of view of the kid or of the parent. They really are at the level of principles for organizing society.
I like this idea of organizational intelligence. Anybody who’s ever been in any kind of organization of any size knows the complexities involved in trying to gather all the information that’s distributed across different people’s perspectives in a cooperative scheme. And you can run your organizations in a way that relies very much on brute force. But, you know, people don’t like to be bossed around. If you’re going to have any kind of organization that isn’t just a very top-down, brute-force sort of thing, at any level of life, you need a set of habits where you have practiced encountering other people’s perspectives. Because the habits of interaction you develop in the classroom or meeting people who are coming from who knows where, those skills, those habits, that openness, that will serve you extremely well in organizations. And you could call that a kind of individual skill of critical thinking, but I actually think it’s more than that. I actually think it’s far more relational. That’s the value of liberal arts education, and that’s why it should really matter to society.
Laine Perfas: So I do want to say one thing. When we were considering doing this episode, we did realize that we might run the risk of sounding very biased because of the people who are participating in this episode. All of us are in higher education. All of us are college-educated. So given that, I do want to put on the hat of a skeptic. One thing that I saw in preparing for this episode is that there’s a Gallup poll last year that showed only 36 percent of people right now have confidence in higher education, period. So my question is, since you are all people who have seen the benefits and can clearly see the value, is the problem that we’re just failing to communicate that value or are these institutions failing to deliver?
Siegel: If you sort of look at public opinion and measure people’s reactions to higher education and say, gosh, I’m losing faith in them, it has to be taken into account that this sampling of public opinion is taking place in an era where the university is being targeted for specifically political reasons. And it’s not just the United States, it’s all over the world. Of course, it’s always a good opportunity to ask the questions we’re asking about what are we doing, what principles should guide it, what is its value, and I think those are very important conversations to have, but I think we should be wary of looking at any sort of results we might have as public opinion as if they were somehow tracking what’s going on inside of the universities.
Hill: I want to say a couple of things in response to that. And I think one is if we know our history, which is part of the liberal arts, we’ll remember that public education began in part to ensure that the electorate was informed. And so this idea of a connection between education and literacy and exposure to ideas and democratic participation is, you know, embedded in who we are as a country here in the United States. I do agree it’s essential to a society, but if it’s so essential, maybe we shouldn’t charge so much for it. And this is back to the return on the investment: In my research with adolescents as they’re making these decisions to go to college or not, they’re thinking about the economy. They’re thinking about the likelihood that they’re going to be able to get a job. They’re thinking about whether or not they see the economy as unstable. And so in our research, students who think that the economy is unstable disengage. They don’t dig in the way human capital theory would predict, that, you know, if the economy is bad, I’m going to go back to school. They disengage because they’re not sure that their efforts will pay off. And so then we say, how about a liberal arts education? And we haven’t made it connect to their sense of insecurity about their future. And some of my colleagues’ work shows that when parents think that the economy is insecure, they become much more controlling. They become much more concerned about linking education to a future job. And so there isn’t a sense of freedom to explore a liberal arts education when people are concerned about the economy.
Deming: We’ve been talking also about the fact that more and more people are going to college and people see it as an economic necessity, and yet they’re losing confidence in it. So I take that as a political question, which is, why is it that people are so upset with higher ed, or why don’t they trust institutions of higher education to deliver on what they want? It isn’t just an economic thing, because clearly they’re still going, so in dollars-and-cents terms they still think it makes sense, but it’s really a question of politics and values. And so I think it’s fair, you know, for the public to say, I don’t necessarily trust an institution. We should listen to that if a bunch of people say, look, I know I have to send my kids to college but I don’t trust what’s going to happen with them when they get there. As a society, and certainly as an institution like Harvard that wants to be seen positively by the public, I don’t think we can afford to just explain that away. I think we have to deal with it.
Laine Perfas: I want to return to something Nancy said at the beginning of the conversation and that’s that sometimes college in general, but specifically a liberal arts education, can feel like a luxury. I know for myself, I’m a first-generation college student and it was a hard road getting to college, graduating as a first-gen student without a clear roadmap, and I would love to hear all of your thoughts on how socioeconomic status, how race, class, all of these other parts of who we are, how they give us a different perspective on this issue.
Deming: It’s a loaded question. You know, Harvard educates the scions of wealthy families and titans of industry and also produces the world’s greatest scientists that develop knowledge and create vaccines that save millions of people. We contain multitudes in that way, and I think we have to recognize that in the way we talk about ourselves and in the criticisms that we receive from outside of the academy.
Siegel: We started off talking about liberal arts education in general in this sort of abstract level that I thought was very helpful. And now suddenly we’re talking about our own specific institution but, you know, if we’re asking about liberal arts in general, I don’t know if we find the kind of performative contradiction that David’s trying to point to when he says, OK, if you’re not part of the elite when you come here, you might well become it when you leave. Just because of the channels that are here, indeed, that’s why many people want to come here so that they can do that. But that’s just us, that’s nothing to do with liberal arts education in general, like Nancy was saying, and I completely agree. We can find the mode of liberal arts education in all kinds of places. Liberal arts education does not equal Ivy League education.
Deming: I do think there’s a sense in which this is the spirit of Sam’s question that the idea of studying liberal arts is a luxury for people who come from privilege, and who are able to go into jobs that are going to make them masters of the universe, so to speak, when they leave a place like Harvard. I do think if you look at the distribution of majors across colleges, liberal arts majors are more common at elite universities than non-elite universities because people who go to these non-elite universities typically go there because they want to get a good job.
Siegel: Yeah, fair enough. I guess there’s a kind of empirical and a principled way of thinking about these questions, and they’re both valuable ways of thinking about them. I was focused on the principled thing of, what would be lost with, from a liberal arts education, just abstracted from its social realization.
Hill: Yeah, I want to come back to Sam’s point about the socioeconomic status part of this. For people who are first-gen and people from low-income backgrounds, I want them to come into any college, whether it’s a state college or two-year college or Harvard, and feel confident in taking courses that broadly expose them to ideas, to history, to science, to literature, and broaden their thinking. But for some, that feels risky; they’ve got one shot. They might be the first person in their family. They might be the first person in their community. In our focus groups with teens who are first-generation students, it feels like a lot of pressure, that they have to deliver on this opportunity that’s like winning the lottery ticket to be able to go to college. And then they have to turn around and come home and make the case that they’re majoring in philosophy. And I want them to major in philosophy and history and literature and to become thinkers. But we have to give them the language so that they can go back home and say, this is why I should be studying this instead of a business degree.
Siegel: Yeah, what we always say to students, because as you can imagine, it comes up a lot in the philosophy department, where I will note that our enrollments have only grown over the past 10 years — we don’t have a crisis of the humanities in the philosophy department. We’re getting more and more concentrators all the time. And it’s because we made this concerted effort to do a lot more outreach and to explain exactly these things that you’re asking about. And one of the things we talk about is the communication skills that you get from being able to analyze arguments and rehearse them and put your own reactions in parentheses while you consider opposing reactions and then be able to become articulate on the page in writing about the relationship between those perspectives. The general communication skills are extremely useful. So none of this is as highfalutin’ as I was talking about before, though I stand by every highfalutin’ thing I said. But if you want to actually communicate to the person, you know, right here now on the ground, who asks, what can I tell my parents? This is what we suggest that they tell their parents. And it seems to work, because our enrollments are growing.
Laine Perfas: Before we close. I wanted to spend a little bit of time thinking about: What is the path forward for liberal arts, either in communicating its current value or evolving to better meet the needs of students today?
Hill: So often we think about [how] the college degree leads to upward mobility. We’re often, whether we say it or not, thinking in economic terms and career trajectory terms. But what does it mean to live a fulfilling life? And when I think about just the rise in mental health disorders, loneliness, depression, anxiety, and all of these things that come from the difficulties we have in connecting with each other and building community with each other, engaging people whose views are different from our own, and being willing to change our mind. And all of those things have impacts on our physical health and our mental health and well-being. And the kind of broad liberal arts education is going to enable us to connect to people across cultures, societies, backgrounds. It’s going to give us skills to deconstruct our identities and reinvent ourselves in new ways. We talk about it in our research. We see how youth come to college. They leave their home communities and they take a germ of themselves with them from those home communities and they let a lot of it go and then they reinvent themselves. We used to think people find themselves in college. They don’t find themselves in college. They learn how to reinvent themselves and rediscover themselves. And I think that is quintessential to the liberal arts education.
Deming: One thing we haven’t talked about at all is technology and how the classroom is, I think, changing in response to technology. It’s not like people couldn’t cheat on their assignments before generative AI. But now it’s just much, much easier. And I think what that really says to me is that we really need to rethink how we develop student skills in the classroom through assignments. And I think there’s a real opportunity to do that. I think the liberal arts could lead the charge. Let’s design classroom-based assessments that help them do that directly. Let’s not just give them an essay and then grade it. Let’s have them give a presentation. Let’s make them try to convince somebody of a different point of view. Let’s have them work together with people and then have their peers grade how much they learned from them, and things like that. I think there’s a ton of opportunity specifically in the liberal arts to design a classroom environment that is more engaging, is more adapted to new technologies, and is intellectually rigorous. My experience is students really respond when you do hold them to high intellectual standards. And so I think there’s an opportunity for not just the liberal arts, but anybody who really wants students to engage critically with things, to redesign their classroom in a way that’s both technologically savvy and engaging for students in this age of AI.
Hill: I totally agree. I think gone are the days of: Write an essay. And I think here are the days where we work on projects and ideas together in teams, convincing each other, debates, deliverables that apply knowledge. I love the idea of technology because I see it as an accelerator. It enables us to move to the next level more quickly. It accelerates our ability to digest and acknowledge and to think critically and to get all the ideas on the table so that we can really do what humans do best.
Siegel: I guess I will put in a good word for the essay. You can have one of your assignments be to write an essay; that’s not at all at odds with a lot of cooperative work or working in groups. It’s definitely a both-and situation when it comes to writing things, and writing is actually a very important skill I wouldn’t want to lose. But absolutely, the model where you’re just kind of, let me impart my information to you, that’s what I would leave behind. The institution and the classroom need those students. We need everybody; it enriches the institution to have people there. That’s a very powerful message that I think can be empowering for the students as well.
Laine Perfas: Thank you all for joining me today.
Deming: My pleasure.
Siegel: It’s a great conversation. Thank you.
Laine Perfas: Thanks for listening. For a transcript of this episode and to find links to all of our other episodes, visit harvard.edu/thinking. This episode was hosted and produced by me, Samantha Laine Perfas. It was edited by Ryan Mulcahy, Simona Covel, and Paul Makishima, with additional editing and production support from Sarah Lamodi. Original music and sound designed by Noel Flatt. Produced by Harvard University, copyright 2024.
The University of Oxford has topped the Times Higher Education world ranking for Computer Science for the seventh consecutive year, in their newly released subject tables.
At a meeting of the Faculty of Arts and Sciences on Oct. 1, 2024, the following tribute to the life and service of the late Robert Rosenthal was spread upon the permanent records of the Faculty.
Robert Rosenthal, one of the most influential psychologists of the past 60 years, died on Jan. 5, 2024, in Riverside, California. He was 90 years old. Rosenthal conducted landmark social psychology experiments on interpersonal expectancy effects, which showed that people can unwittingly convey how they expect others to behave, thereby subtly inducing them to act in accordance with expectation. Such effects, he found, occur with teachers, supervisors, and psychotherapists with their pupils, employees, and patients. He was also a major contributor to statistical and methodological advances in the behavioral sciences.
Rosenthal was born in Giessen, Germany, on March 2, 1933, and was the son of Hermine (Kahn) and Julius Rosenthal. He spent his early years in the town of Limburg, where his father co-owned a dry goods factory. The factory was seized by the Nazis in 1938, and the Rosenthal family fled to Cologne, seeking to conceal their Jewish identity in the anonymity of the big city. Although they had obtained a quota number enabling them to immigrate to America, someone stole it. Fortunately, Julius Rosenthal’s brothers were living in the British colony of Southern Rhodesia. They helped the family escape to Africa, obtain a visa, and then settle in New York City in 1940.
Julius Rosenthal moved his family from Queens to Los Angeles, where he opened a department store and where Robert Rosenthal finished his final year of high school. Robert Rosenthal then enrolled at the University of California, Los Angeles, where he majored in psychology. He received his B.A. in 1953 while also taking graduate courses in clinical psychology, enabling him to obtain his Ph.D. in 1956. After completing his clinical training, he was appointed Assistant Professor, then Associate Professor, and became the Director of clinical training in the Ph.D. program in clinical psychology at the University of North Dakota, where he taught for five years. He spent the next 37 years at Harvard University, first as a lecturer on clinical psychology and then as a professor of social psychology. He chaired the Department of Psychology (1992–1995) and was appointed Edgar Pierce Professor of Psychology (1995–1999). He retired from Harvard and began working for the University of California, Riverside, in 1999, where he was Distinguished Professor and University Professor until his retirement in 2018.
Although trained as a clinician, Rosenthal soon moved his research in the direction of social psychology. In the 1960s, he and Kermit Fode discovered the Experimenter Bias Effect. In one study, he informed student experimenters that one group of rats had been bred to be very proficient at learning to navigate mazes, whereas another group had not. Although the two groups were in fact indistinguishable, the “maze-bright” rats outperformed the “maze-dull” ones, apparently because the students unintentionally handled them especially well. Rosenthal’s work on the Experimenter Bias Effect encouraged psychologists to conduct their experiments in a double-blind fashion, whereby neither the subjects nor the assistants testing them were aware of the hypothesis under test.
Pygmalion in the Classroom was among his most famous experiments, conducted with Lenore Jacobson, an elementary school principal. They told teachers that the (bogus!) Harvard Test of Inflected Acquisition had revealed that certain students, but not others, were about to exhibit a growth spurt in measurable intelligence over the course of the school year. Follow-up intelligence tests indicated that students whose teachers expected intellectual growth did, in fact, exhibit an increase in measured IQ greater than did students whose teachers had no such expectation of them. Apparently, teachers favorably interacted with the students whom they expected to excel, thereby encouraging the very progress supposedly forecast by the test.
Rosenthal also made many contributions to the related field of nonverbal behavior. In 1993 Nalini Ambady and Rosenthal won the American Association for the Advancement of Science’s Prize for Behavioral Science Research for their work on thin slices of behavior. They found that brief — usually less than one minute — slices of audiotaped or videotaped expressive behavior (e.g., manners of speaking, gestures, and facial expressions) enabled raters to judge a person’s competence, likability, and other attributes, and that these judgments predicted outcomes, such as a person’s success as a teacher, as accurately as comprehensive student evaluations did. Brief audiotapes of physicians interacting with their patients likewise distinguished physicians who had been sued from those who had not.
In parallel with his work in social psychology, Rosenthal published many articles and books on statistics and methodology. Often collaborating with Donald B. Rubin or Ralph L. Rosnow, he was instrumental in developing meta-analysis, a procedure for summarizing the results of many studies. As co-chair of the American Psychological Association’s (APA) Task Force on Statistical Inference, he helped shape guidelines for best practices, such as focused contrast analyses and effect size estimation.
Rosenthal was the recipient of many honors, including the Distinguished Scientist Award from the Society for Experimental Social Psychology (1996), the James McKeen Cattell Fellow Award from the American Psychological Society (2001), and the Samuel J. Messick Award for Distinguished Scientific Contribution from the APA’s Division 5 (Quantitative and Qualitative Methods) (2002). He received an honorary doctorate from the University of Giessen in 2003, 70 years after he was born at the university’s medical center.
Rosenthal’s wife of 59 years, Marylu (Clayton), passed away in 2010. He is survived by his daughters, Virginia (Ginny) Rosenthal Mahasin and Roberta Rosenthal Hawkins; his son, David Clayton Rosenthal; and six grandchildren.
Rosenthal was a wonderful colleague and the beloved mentor of many students. His personal warmth, kindness, and ever-smiling, down-to-earth manner were as memorable as his curiosity, enthusiasm, and brilliance.
Respectfully submitted,
Jill M. Hooley
Ellen J. Langer
Mark F. Lenzenweger (Binghamton University)
Donald B. Rubin
Richard J. McNally, Chair
At a meeting of the Faculty of Arts and Sciences on Oct. 1, 2024, the following tribute to the life and service of the late David Gordon Mitten was spread upon the permanent records of the Faculty.
Have you seen the Attic black-figure vase in the Harvard Art Museums depicting Herakles playing the kithara and coping with his lionskin at the same time? Or the early Byzantine weighing machine, with its bust-shaped weight depicting an empress? Or the medallion portrait of the comic playwright Menander, uniquely inscribed in antiquity with his name and, therefore, invaluable for identifying uninscribed copies elsewhere? If so, you owe the fascination of seeing these objects at Harvard to David Gordon Mitten, curator, teacher, and archaeologist.
Mitten was born in Youngstown, Ohio. New World archaeology at the University of New Mexico and in the Smithsonian Institution’s River Basin Surveys provided his early training in fieldwork. In 1957 he was awarded his B.A. in Classics at Oberlin College and, in 1962, his Ph.D. in Classical Archaeology at Harvard, where he spent the rest of his career. His participation in the University of Chicago’s Isthmia excavations in Greece yielded his doctoral dissertation on the terracotta figurines from the Isthmian sanctuary of Poseidon, which he wrote under the direction of George Hanfmann. As an excavator, Mitten’s most significant contribution was the discovery of a synagogue from the Roman era at Sardis, the capital of the fabled kingdom of Lydia in western Anatolia, where the Harvard–Cornell Archaeological Exploration of Sardis started annual excavation in 1958. He was an associate director of the Sardis excavation for 40 years.
Upon completion of his Ph.D., Mitten was appointed Instructor in the Fine Arts and, in 1964, Francis Jones Assistant Professor of Classical Art. In 1968 he was appointed associate professor with tenure (a short-lived concept at Harvard), and, the following year, he received a full professorship as the James Loeb Professor of Classical Art and Archaeology in the Department of the Classics. In 1974, succeeding his teacher and mentor, he was appointed Curator of Ancient Art; in 1996, to his humble delight, the position was endowed as the George M. A. Hanfmann Curatorship of Ancient Art.
Until his retirement from the curatorship in 2005, an occasion on which he was celebrated at an international symposium of friends, colleagues, and former students, Mitten acquired a rich array of ancient objects in all media for Harvard, especially bronzes and coins but also marble sculptures and pottery. In 2010 he retired from his professorial chair, having taught generations of students — in the Harvard Divinity School, where he offered a renowned seminar on the archaeology of the New Testament with Helmut Koester; the Division of Continuing Education; the Graduate School of Arts and Sciences; and, with special enthusiasm, Harvard College, where his course Images of Alexander the Great was a legend in the Core Curriculum.
Mitten’s major contributions in print comprise two catalogs, “Master Bronzes from the Classical World,” co-edited with Suzannah Doeringer and published to accompany an exhibition that traveled from Harvard in 1967–68 to the City Art Museum of St. Louis and the Los Angeles County Museum of Art, and “The Gods Delight: The Human Figure in Classical Bronze,” co-edited with Arielle P. Kozloff and published to accompany an exhibition that traveled from the Cleveland Museum of Art in 1988–89 to, once again, the Los Angeles County Museum and then to the Museum of Fine Arts in Boston. His special interest in bronzes left Harvard a legacy of important acquisitions, including small objects of personal attire (fibulae, the ancient equivalent of the safety pin) and some larger pieces ranging from elegant Picasso-esque figures of the Greek Geometric period to a Roman statuette of a goddess wearing a bird-shaped headdress.
Mitten’s love of ancient objects benefited not only the Harvard Art Museums, through 30 years of passionate curatorial acquisition, but also generations of students, whom he charmed and enthused with hands-on demonstration of the intricacies of craftsmanship residing in the humblest of objects. Far from protecting these pieces from the hazards of human touch, he would hand round gloves at the beginning of every class and teach his students how handling an object is key to understanding it. He believed in the capacity of fragments to tease students’ imagination and test their connoisseurship, and, alongside glamorous purchases like an Etruscan black-figure amphora depicting the ambush of the Trojan hero Troilus by Achilles, he created a valuable teaching tool by gathering ostensibly trivial fragments of red- and black-figure pottery. Intermittently during his career, he published articles on objects in Harvard’s collections, thereby making them known beyond the confines of the Yard.
Just as Mitten could make an inert object come alive as he cradled it in the palm of his hand or held it up to the light to illustrate a particular swirl of drapery or carefully shaped lock of hair, so, too, could he enthrall an audience with tales of excavating at Sardis or a detective story tracing the pedigree of a new acquisition. He took an intense interest in other people, both their dreams and their challenges, and sought opportunities to further their ambitions. The legacy of his passion for ancient coins was secured with the appointment of Harvard’s first curator of coins, Dr. Carmen Arnold-Biucchi, in 2002. Students of the civilizations adjacent to Greece and Rome also benefited from acquisitions that he made, such as two groups of cylinder seals intricately carved in Mesopotamia long before the Greeks and Romans came to dominate the Mediterranean world.
Mitten was awarded a Guggenheim Foundation Fellowship in 1976, the Petra Shattuck Teaching Prize from the Harvard Extension School in 1988, and the Phi Beta Kappa Prize for Excellence in Teaching in 1993. In 2009 he received the Faculty of the Year Award from the Harvard Foundation. Having accepted Islam during his excavations at Sardis in 1969, he became a faculty advisor to the Harvard Islamic Society; he practiced Sufism and frequently delivered the homily at Morning Prayers in Appleton Chapel. He is survived by his wife Heather Barney; two daughters from his first marriage, Claudia Hon and Eleanor Mitten; his stepdaughter, Sophia Barney-Farrar; four grandchildren; and two great-grandchildren.
At a meeting of the Faculty of Arts and Sciences on Oct. 1, 2024, the following tribute to the life and service of the late Daniel Albright was spread upon the permanent records of the Faculty.
Daniel Albright, the Ernest Bernbaum Professor of Literature at Harvard from 2003 until his untimely death at the age of 69, was a prolific and ingenious analyst of literary modernism, lyric poetry, the challenging early 20th-century intersections of music, science, literature, and art, and the larger theory of aestheticism across the arts. His 16 books, along with his amusing and wide-ranging lecture courses on modernism at Harvard and his always warm and lively conversational manner, defined a place where learning, whimsy, a photographic memory, amusing side glances, a drawling eloquence, mischievous formulations, exact timing, and theatrical pauses delighted his listeners and readers while offering a precise and detailed analytic account of those moments of aesthetic experience that define our personal and exhilarating encounters with works of art.
Born in Chicago, Illinois, on Oct. 29, 1945, Albright attended Rice University, where he majored in mathematics until he switched abruptly to English literature. At Yale University, he completed his Ph.D. in three years with a thesis on the poetry of William Butler Yeats. His first book, “The Myth Against Myth: A Study of Yeats’s Imagination in Old Age,” was published at once by Oxford University Press. After Yale, Albright taught for 17 years at the University of Virginia, where he was promoted to full professor and published five books on lyric theory, Yeats, Tennyson, and the modernism of Thomas Mann, Beckett, Nabokov, Schoenberg, and Woolf. While at Virginia, Albright married Karin Larson, with whom he had a son, Christopher.
In the middle third of his academic career, Albright taught for 16 years at the University of Rochester with an affiliate appointment in musicology at the Eastman School of Music. While at Rochester, his many publications on music and the relations between early 20th-century modernism in music and literature began his search for a wider aesthetics of modernism across the arts.
Once at Harvard, the final third of Albright’s career unfolded. In his later books, he developed a theory that he called panaesthetics, a challenge to the notion that each art has not only a specific medium but also unique limits and central preoccupations. Albright’s popular General Education course, Putting Modernism Together, reflected the expanding vision of this phase of his career.
In Albright’s many books, lectures, and articles, he developed a broad interdisciplinary account of modernism, while setting that interest within his account of a lyric tradition that includes Yeats, Tennyson, and the larger theory of lyric poetry. He situated literary modernism within early 20th-century music and science, above all within music because both its lyricism and its experiments with form provided strong analogues with literary modernism.
As a literary and musical interpreter, Albright is characterized by a demonic attentiveness, by learning, by a fanciful mind, and by brilliant writing. Drawn to the artistic extremity of Schoenberg, Beckett, and Nabokov or the collaborative work of Gertrude Stein, Albright took modernism to require, in part, difficulty, and he saw that it involved elaborate and playful engagement with language. His was an empirical, speculative, text-based criticism.
Albright’s first book on Yeats’s imagination in old age could not, with all its attention to the language of poetry and myth, have predicted his second, two-part project on modernism: “Personality and Impersonality: Lawrence, Woolf, Mann” and “Representation and the Imagination: Beckett, Kafka, Nabokov, and Schoenberg.” Each of these books set out to isolate a core feature of representation within modernity. In “Representation,” Albright, in his always paradoxical way, worked out the costs of creating an extreme fictive world, an abstract world established by means of the details of the real, a collapsing project always pushed too far in order to work at all. The earlier book on expression and personality started from the opposite direction, studying fictive worlds that express the writer’s personality and biography by means of the central figure of the artist. Here too Albright worked in the direction of paradox since each of these three careers required a swerve into the abstract, the allegorical, and the impersonal after a certain exhaustion of the artistic resources of personality.
Albright’s work in the second half of his career moved this ambition to a larger terrain. Literary modernism itself is now configured and expressed through the competing and companionable modernisms of art, music, and science. These ambitions define Albright’s two major mid-career books “Quantum Poetics: Yeats, Pound, Eliot, and the Science of Modernism” and “Untwisting the Serpent,” his first attempts at large-scale aesthetics of modernism across the arts. Albright resisted Lessing’s strong argument for the separation of powers within the different domains of art in order to isolate, for our attention, small-scale shared aesthetic features common in the 20th century to music, drama, poetry, and opera. Gestus, or gesture, is one of those small aesthetic units. Opera — with its ambition to unify all of the arts within a performance where words, music, gesture, presence, story, and the visual effects of costume and spectacle are all drawn together — is certainly at the heart of Albright’s idea of the dream of modernism.
From Albright’s earliest work on Yeats to his all-embracing final work on panaesthetics, his career was that of a pathbreaker who moved on to ever newer enlargements of the domain within which we pose our questions about the central works and artists of modernism and its aftermath.
Albright was named a National Endowment for the Humanities fellow in 1973, a Guggenheim Fellow in 1976, and a Nina Maria Gorrissen Fellow at the American Academy in Berlin in 2012. He died suddenly on Jan. 3, 2015, in Cambridge, Massachusetts. He is survived by his domestic partner, Marta S. Rivera Monclova; his son, Christopher Albright; and his ex-wife, Karin.
Albright will be remembered for a style that is that of an aesthete: savoring words, ideas, juxtapositions, and the discoveries of his own brilliant mind and polymath learning. Uniqueness, a rich, high style, a challenging comedy of intellect in the manner of Nabokov, a seriousness about beauty and invention — these are the traits that defined Albright’s intellectual performance.
Respectfully submitted,
John T. Hamilton
Christopher Hasty
Elaine Scarry
Philip Fisher, Chair
Two grants of up to $25,000 each will be awarded for research in the life sciences to Cornell faculty whose work advances the university’s diversity, equity, and inclusion goals.
The collaboration, which includes researchers from the University of Cambridge, aims to accelerate progress on new neuro-technologies, including miniaturised brain implants designed to treat depression, dementia, chronic pain, epilepsy and injuries to the nervous system.
Neurological and mental health disorders will affect four in every five people in their lifetimes, and present a greater overall health burden than cancer and cardiovascular disease combined. For example, 28 million people in the UK are living with chronic pain and 1.3 million people with traumatic brain injury.
Neuro-technology, where technology is used to control the nervous system, has the potential to deliver new treatments for these disorders, in much the same way that heart pacemakers, cochlear implants and spinal implants have transformed medicine in recent decades.
The technology can be in the form of electronic brain implants that reset abnormal brain activity or help deliver targeted drugs more effectively, brain-computer interfaces that control prosthetic limbs, or technologies that train the patient’s own cells to fight disease. ARIA’s Scalable Neural Interfaces opportunity space is exploring ways to make the technology more precise, less invasive, and applicable to a broader range of diseases.
Currently, an implant can only interact with large groups of neurons, the cells that transmit information around the brain. Building devices that interact with single neurons will mean more accurate treatments. Neuro-technologies also have the potential to treat autoimmune disorders, including rheumatoid arthritis, Crohn’s disease and type-1 diabetes.
The science of building technology small enough, precise enough and cheap enough to make a global impact requires an environment where the best minds from across the UK can collaborate, dream up radical, risky ideas and test them without fear of failure.
Professor George Malliaras from the University of Cambridge’s Department of Engineering is one of the project leaders. “Miniaturised devices have the potential to change the lives of millions of people currently suffering from neurological conditions and diseases where drugs have no effect,” he said. “But we are working at the very edge of what is possible in medicine, and it is hard to find the support and funding to try radical, new things. That is why the partnership with ARIA is so exhilarating, because it is giving brilliant people the tools to turn their original ideas into commercially viable devices that are cheap enough to have a global impact.”
Cambridge’s partnership with ARIA will create a home for original thinkers who are struggling to find the funding, space and mentoring needed to stress-test their radical ideas. The three-year partnership is made up of two programmes:
The Fellowship Programme (up to 18 fellowships)
Blue Sky Fellows (a UK-wide offer): we will search the UK for people from any background with a radical idea in this field and the plan and personal skills to develop it. The best candidates will be offered a fellowship with funding to test their ideas rapidly in Cambridge. These Blue Sky Fellows will receive mentorship from our best medical, scientific and business experts and may be offered accommodation at a Cambridge college. We will be looking for a specific type of person to be a Blue Sky Fellow: the kind of character who thinks at the very edge of the possible, who doesn’t fear failure, and whose ideas have the potential to change billions of lives, yet who would struggle to find funding from existing sources. Not people who think outside the box, but people who don’t see a box at all.
Activator Fellows (a UK-wide offer): those who have already proved that their idea can work, yet need support to turn it into a business, will be invited to become Activator Fellows. They will be offered training in entrepreneurial skills, including grant writing, IP management and clinical validation, so their innovations can be made ready for investment.
The Ecosystem Programme
The Ecosystem Programme is about creating a vibrant, UK-wide neurotechnology community where leaders from business, science, engineering, academia and the NHS can meet, spark ideas and form collaborations. This will involve quarterly events in Cambridge, road trip events across the UK and access to the thriving online Cambridge network, Connect: Health Tech.
“This unique partnership is all about turning radical ideas into practical, low-cost solutions that change lives,” said Kristin-Anne Rutter, Executive Director of Cambridge University Health Partners. “Cambridge is fielding its best team to make this work and using its networks to bring in the best people from all over the UK. From brilliant scientists to world-leading institutes, hospitals and business experts, everyone in this collaboration is committed to the ARIA partnership because, by working together, we all see an unprecedented opportunity to make a real difference in the world.”
“Physical and mental illnesses and diseases that affect the brain such as dementia are some of the biggest challenges we face both as individuals and as a society,” said Dr Ben Underwood, Associate Professor of Psychiatry at the University of Cambridge and Honorary Consultant Psychiatrist at Cambridgeshire and Peterborough NHS Foundation Trust. “This funding will bring together different experts doing things at the very limits of science and developing new technology to improve healthcare. We hope this new partnership with the NHS will lead to better care and treatment for people experiencing health conditions.”
Cambridge partners in the project include the Departments of Engineering and Psychiatry, Cambridge Neuroscience, the Milner Therapeutics Institute, the Maxwell Centre, Cambridge University Health Partners (CUHP), Cambridge Network, the Babraham Research Campus, Cambridgeshire and Peterborough NHS Foundation Trust, and Vellos.
A team from across the Cambridge life sciences, technology and business worlds has announced a multi-million-pound, three-year collaboration with the Advanced Research and Invention Agency (ARIA), the UK government’s new research funding agency.
In 2020, Hassabis and Jumper of Google DeepMind presented an AI model called AlphaFold2. With its help, they have been able to predict the structure of virtually all the 200 million proteins that researchers have identified.
Since their breakthrough, AlphaFold2 has been used by more than two million people from 190 countries. Among a myriad of scientific applications, researchers can now better understand antibiotic resistance and create images of enzymes that can decompose plastic.
The duo received the Nobel along with Professor David Baker of the University of Washington, who succeeded in using amino acids to design a new protein in 2003.
Sir Demis Hassabis read Computer Science as an undergraduate at Queens' College, Cambridge, matriculating in 1994. He went on to complete a PhD in cognitive neuroscience at University College London and create the videogame company Elixir Studios.
Hassabis co-founded DeepMind in 2010, a company that developed masterful AI models for popular boardgames. The company was sold to Google in 2014 and, two years later, DeepMind came to global attention when the company achieved what many then believed to be the holy grail of AI: beating the champion player of one of the world’s oldest boardgames, Go.
In 2014, Hassabis was elected as a Fellow Benefactor and, later, as an Honorary Fellow of Queens' College. In 2024, he was knighted by the King for services to artificial intelligence.
In 2018, the University announced the establishment of a DeepMind Chair of Machine Learning, thanks to a benefaction from Hassabis’s company, and appointed Professor Neil Lawrence to the position the following year.
“I have many happy memories from my time as an undergraduate at Cambridge, so it’s now a real honour for DeepMind to be able to contribute back to the Department of Computer Science and Technology and support others through their studies,” said Hassabis in 2018.
“It is wonderful to see Demis’s work recognised at the highest level — his contributions have been really transformative across many domains. I’m looking forward to seeing what he does next!” said Professor Alastair Beresford, Head of the Department of Computer Science and Technology and Robin Walker Fellow in Computer Science at Queens' College.
In a statement released by Google DeepMind following the announcement by the Nobel committee, Hassabis said: "I’ve dedicated my career to advancing AI because of its unparalleled potential to improve the lives of billions of people... I hope we'll look back on AlphaFold as the first proof point of AI's incredible potential to accelerate scientific discovery."
Dr John Jumper completed an MPhil in theoretical condensed matter physics at Cambridge's famous Cavendish Laboratory in 2008, during which time he was a member of St Edmund’s College, before going on to receive his PhD in Chemistry from the University of Chicago.
"Computational biology has long held tremendous promise for creating practical insights that could be put to use in real-world experiments," said Jumper, Director of Google DeepMind, in a statement released by the company. "AlphaFold delivered on this promise. Ahead of us are a universe of new insights and scientific discoveries made possible by the use of AI as a scientific tool."
“The whole of the St Edmund’s community joins me in congratulating our former Master’s student Dr John Jumper on this illustrious achievement – the most inspiring example imaginable to our new generation of students as they go through their matriculation this week,” said St Edmund’s College Master, Professor Chris Young.
Professor Deborah Prentice, Vice-Chancellor of the University of Cambridge, said: “I’d like to congratulate Demis Hassabis and John Jumper, who, alongside Geoffrey Hinton yesterday, are all alumni of our University. Together, their pioneering work in the development and application of machine learning is transforming our understanding of the world around us. They join an illustrious line-up of Cambridge people to have received Nobel Prizes – now totalling 125 individuals – for which we can be very proud.”
Article updated on 10 October 2024 to reflect that the number of Cambridge people to have received Nobel Prizes now totals 125.
Two University alumni, Sir Demis Hassabis and Dr John Jumper, have been jointly awarded this year’s Nobel Prize in Chemistry for developing an AI model to solve a 50-year-old problem: predicting the complex structures of proteins.
ETH Zurich has once again been recognised as one of the world’s top universities in the latest Times Higher Education (THE) Rankings. The rankings also identify areas where the university has potential for further development.
A recent study published in Nature Geoscience provides groundbreaking insights into long-term changes in tropical weather patterns that are leading to an increased frequency of extreme weather events such as heatwaves and heavy rainfall in the Indo-Pacific. These changes are possibly driven by global warming, among other factors. The paper, titled “Indo-Pacific regional extremes aggravated by changes in tropical weather patterns”, employs a recently proposed methodology that characterises occurrence trends of weather patterns using atmospheric analogues, which are linked to the concept of recurrences in dynamical systems theory.
Unlike previous approaches, which have often focused on shifts in average behaviour, the method used in the study can identify occurrence trends for each daily weather pattern, thereby enabling a direct study of their association with extreme events — something that was previously unachievable. Thanks to this methodology, it was possible to identify the emergence of new large-scale atmospheric patterns, which are exacerbating regional weather extremes.
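To make the analogue methodology concrete, here is a minimal sketch in Python of the counting logic: two days are treated as analogues when their weather fields are unusually close, and a pattern is flagged as emerging when its analogues occur more often late in the record than early. The synthetic data, sizes, and thresholds are illustrative assumptions, not the study's actual configuration.

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)

# Toy stand-in for daily weather maps: n_days x n_gridpoints. In the real
# study these would be reanalysis fields over the Indo-Pacific.
n_days, n_grid = 1500, 60
fields = rng.standard_normal((n_days, n_grid))

# Two days are 'analogues' if their fields are closer than a small quantile
# of all pairwise distances -- a recurrence in the dynamical-systems sense.
dists = cdist(fields, fields)
np.fill_diagonal(dists, np.inf)             # a day is not its own analogue
threshold = np.quantile(dists[np.isfinite(dists)], 0.02)
is_analogue = dists < threshold             # n_days x n_days boolean matrix

# Occurrence trend per daily pattern: do its analogues appear more often in
# the late half of the record than in the early half?
half = n_days // 2
early = is_analogue[:, :half].sum(axis=1)
late = is_analogue[:, half:].sum(axis=1)
trend = late - early
emerging = trend > np.quantile(trend, 0.95)  # strongly increasing patterns
print(f"{emerging.sum()} of {n_days} daily patterns flagged as 'emerging'")
```

In the study itself, the analogues are computed on reanalysis fields and the resulting trends are tested against known modes of variability; this toy only illustrates the recurrence-counting step.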
The study, led by doctoral student Chenyu Dong and Assistant Professor Gianmarco Mengaldo from the College of Design and Engineering (CDE) at the National University of Singapore (NUS), together with a collaborative team of international scientists, uses advanced reanalysis datasets to analyse the tropical Indo-Pacific region’s evolving weather systems. The researchers found that since the 1990s, previously rare weather patterns have become more common, while some others that were once prominent have nearly disappeared. These changes are linked to shifts in the Pacific Walker Circulation, a key driver of tropical weather and climate, whose future changes remain highly uncertain in current climate models. Detecting long-term trends in the tropical Indo-Pacific has consistently been a challenge, especially on a daily time scale, due to the confluence of several modes of variability that tend to overshadow trend signals. This study is one of the first to investigate long-term changes in tropical weather patterns and their relationship with extreme events on a daily time scale.
“Critical changes in tropical weather patterns are significantly aggravating regional extremes, namely heatwaves and extreme precipitation, in the tropical Indo-Pacific region. Our study is one of the first to disentangle trend vs variability in the tropics, an aspect that has been historically challenging. We show that the changes identified cannot be fully explained by interannual modes of variability, and a possible culprit is anthropogenic global warming, though the influence of other factors may play a role. Further in-depth analyses are required to better inform climate modelling and climate adaptation strategies, especially in the tropical Indo-Pacific, where climate models still struggle to provide reliable projections. For Singapore, and other countries in Southeast Asia, improving climate projection capabilities and better understanding how tropical dynamics and regional extremes are evolving is of vital importance. This study is one step towards this direction,” said Asst Prof Mengaldo from the Department of Mechanical Engineering at CDE, NUS.
Key findings
Emerging weather patterns: New large-scale atmospheric configurations (or weather patterns) that were rare before the 1990s have emerged, while some others that were prominent have disappeared. These emerging weather patterns manifest as a stronger Pacific Walker circulation (or Walker cell) and are associated with wetter and warmer conditions in Southeast Asia and drier conditions in the equatorial Pacific. The emerging patterns cannot be explained by interannual modes of natural variability, such as the El Niño Southern Oscillation (ENSO), the Indian Ocean Dipole (IOD), the Pacific Decadal Oscillation (PDO), and the Atlantic Multidecadal Oscillation (AMO), but they are instead likely driven by long-term trends from the 1940s to the present. These trends and shifts of large-scale atmospheric dynamics in the tropical Indo-Pacific may be caused by global warming and other factors. Although these identified emerging patterns may be driven by the combined effect of different factors (excluding known modes of inter-annual variability), the implications for current and near-future climate are critical.
Considerable increase in weather extremes: The emerging weather patterns are strongly linked to increased regional weather extremes, namely heatwaves and extreme rainfall. In certain regions, these extremes are up to four times more frequent than climatology when associated with emerging weather patterns. For example, several regions, including parts of Indonesia, Singapore, South India, the Philippines and the western Pacific, exhibit markedly increased frequency of heatwaves compared with climatology. The South China Sea and its surrounding areas, including Vietnam and the Philippines, the Malay Peninsula, Singapore, the tip of South India and a portion of the Indian Ocean off the coast of Australia, exhibit considerably increased frequency of extreme rainfall. This increase in extreme weather is noteworthy, given that such changes are associated with long-term climate trends in a region that is highly vulnerable to weather extremes.
Significance of the study: These findings are significant in the context of climate change as they reveal that new and emerging weather patterns are contributing to increasingly severe weather in a region home to over a billion people, as well as unique and vulnerable ecosystems. The increased occurrences of heatwaves and extreme rainfall can lead to acute heat distress and flooding, respectively. With extreme weather events posing severe socio-economic and environmental challenges, understanding these changes is critical for improving climate models and informing future climate adaptation strategies.
This study was conducted by an international team of climate scientists from leading institutions, including NUS, Institut Pierre-Simon Laplace (IPSL), Uppsala University, Stockholm University, University of Cambridge, Columbia University, World Meteorological Organization (WMO), and the Centre for Climate Research Singapore (CCRS). The team is committed to advancing climate research to better understand the impacts of a changing climate on regional weather patterns and extremes.
“The emergence of new tropical weather patterns is a key signal of how anthropogenic climate change is altering atmospheric dynamics on a daily scale. Our findings show a significant increase in heatwaves and extreme precipitation in the Indo-Pacific, which may have profound consequences not only for the region but for global climate as well. This shift in weather patterns challenges our previous understanding of tropical variability and highlights the urgency to improve climate projections and preparedness for extreme events in vulnerable regions,” said Dr Davide Faranda, Research Director at the Laboratoire de Science du Climat et de l'Environnement (LSCE) of Institut Pierre-Simon Laplace (IPSL), French National Center for Scientific Research (CNRS).
“Heatwaves and extreme rainfall are two weather extremes that require careful and advance planning from policymakers to mitigate their effects. For instance, more frequent heatwaves may lead to high peaks in electricity demand with possible power outages, many heat-related illnesses that would need enough hospital beds, and crop failure that could threaten food security. More frequent extreme rainfall may lead to floods, which in turn are a direct threat to human life, buildings and infrastructure. Extreme rainfall may also lead to crop failure, contamination of drinkable water, and landslides. Southeast Asia is a relatively research-scarce region in terms of extreme weather, and further efforts are required to better prepare policymakers and local communities for a changing climate,” said Asst Prof Mengaldo.
University of Melbourne Chancellor, Jane Hansen AO, today announced the appointment of Professor Emma Johnston AO as the 21st Vice-Chancellor of the University of Melbourne.
NUS has come in at 17th place in the Times Higher Education (THE) World University Rankings 2025 after holding at 19 for two consecutive years. It maintains its 3rd position in Asia and is currently the top university in ASEAN.
Higher education has witnessed significant shifts and disruptions in recent years, opening doors for a reinvention of the approach to education and research. NUS’ focus on adapting curricula and pedagogy to match evolving industry demands has nurtured future-resilient, AI-ready and enterprising graduates, while a renewed emphasis on innovative, interdisciplinary research has contributed impactful solutions to the most pressing global challenges.
NUS President Professor Tan Eng Chye said that NUS, which will celebrate its 120th anniversary in 2025, continues on a strong and confident trajectory and remains focused on its mission to shape young minds and create positive outcomes for future generations.
“We are immensely proud that NUS has risen to 17th in the world, our highest position in THE World University Rankings. This achievement is the result of the relentless pursuit of excellence by the NUS community,” said Prof Tan.
THE World University Rankings 2025 is the 21st edition of the rankings, which have grown from 200 universities to more than 2,000 today, making them the most global and inclusive university rankings in the world.
THE uses 18 carefully calibrated performance indicators in the areas of teaching, research environment, research quality, international outlook and industry, providing the most comprehensive and balanced comparisons, trusted by students, academics, university leaders, industry and governments.
Phil Baty, THE Chief of Global Affairs, said that Singapore’s status as a world-class hub for higher education, research and innovation talent is well and truly established, adding that the latest rankings are a testament to the nation’s strong support for universities and R&D, and to the centring of human talent as Singapore’s greatest resource.
“Congratulations to NUS on its highest ever ranking position. It is a remarkable achievement – making NUS a beacon of excellence not just for Singapore, but for Asia and the world,” said Mr Baty.
For 2025, the University of Oxford holds first place, while the Massachusetts Institute of Technology and Harvard University come in second and third respectively.
At an event at the School of Dental Medicine, members of the Penn community gathered to talk about the intersection of free speech and racism in academia.
Last month, Earth welcomed a visitor known as 2024 PT5. To learn more about this celestial guest, Penn Today caught up with two astronomers in the School of Arts & Sciences, Gary Bernstein and Bhuvnesh Jain.
Diving into the myths and legends behind sea monsters
A sampling of species from the “Sea Monsters” exhibition includes a musky octopus, a white shark jaw, and a fangtooth fish.
Video and photos by Stephanie Mitchell/Harvard Staff Photographer
Bethany Carland-Adams
HMSC Communications
New exhibit lets visitors discover sea creatures often more astonishing than the fantastical beings we may have imagined
The idea of sea monsters has captivated us for centuries. Could there really be something scary lurking in the dark depths? Folklore and popular culture say yes, yet science urges us to dive a little deeper.
“Sea Monsters: Wonders of Nature and Imagination” is a new exhibition at the Harvard Museum of Natural History, one of the four Harvard Museums of Science & Culture, which investigates the mystery and lore behind some of the ocean’s most fascinating creatures. It was inspired by a popular course taught by Peter Girguis, guest curator and Harvard professor of organismic and evolutionary biology.
“This course is really a survey of humankind’s relationship with the ocean, from ancient mariners to current political affairs,” he said. “What’s been most rewarding is seeing the students realize how important the ocean is to humanity and how we often apply monstrosity to things we simply don’t understand.”
Sea monsters are a universal phenomenon, appearing in the myths and legends of cultures around the world. However, many of these exciting stories come from real creatures hidden in the deep. In the exhibition, visitors will ask themselves: Do sea monsters exist? And if so, what do they tell us about ourselves and our connection with the ocean?
Ancient mythology depicts the sea as a realm of chaos, filled with fearsome creatures like the Greek Hydra. Hindu mythology conjured the Makara, a sea monster that symbolizes protection and good fortune. In African folklore, creatures like the Mngwa and the Inkanyamba are feared as evil water spirits. However, with modern science and technologies, we better understand the lives of the real creatures behind these legends. For instance, the New England-based Scituate Sea Monster was ultimately identified as a decaying basking shark, and the Kraken in Jules Verne’s “Twenty Thousand Leagues Under the Sea” was likely inspired by the enigmatic deep-sea giant squid.
Harvard Professor of Organismic and Evolutionary Biology Peter Girguis visits the exhibit “Sea Monsters: Wonders of Nature and Imagination” at the Harvard Museum of Natural History.
Visitors will discover the existence of sea creatures whose real lives are often more astonishing than the fantastical beings we may have imagined. Visitors can see these creatures firsthand when peering into displays of specimens from the Museum of Comparative Zoology’s extensive collections, such as a viperfish, the tentacle and beak of a giant squid, and a Megalodon shark tooth.
The exhibition features historical illustrations of these fabled monsters and detailed ancient mariners’ maps. Ancient maps held important cultural knowledge, often revealed through depictions of mythological creatures that served as warnings of dangerous and uncharted waters. Also on display is a Peruvian ceramic pot made by the ancient Moche people, which shows a crab with human-like features losing a battle with a god. A two-foot Gregorian reflecting telescope made around 1750, decorated with two sea serpents, also appears in the gallery.
Sloane’s viperfish, courtesy of the Museum of Comparative Zoology, Harvard University.
Tentacles and teeth are frequently associated with monsters of the sea. In reality, tentacles are important and adaptable tools used to sense the environment, catch food, and provide protection. Surprisingly, the viperfish’s needle-like fangs feel more like toothbrush bristles than daggers. And while the deep-sea anglerfish may look scary, most are just a few inches long and only eat small fish and shrimp. Many of these creatures are captured in eerie and stunning deep-sea photography by Solvin Zankl and others.
The exhibition explores how ocean ecosystems are threatened by creatures such as the crown-of-thorns starfish, which feeds on coral polyps, and Sargassum seaweed, which is growing out of control in some places due to agricultural-nutrient runoff, creating dead zones in the ocean and overwhelming beaches. Even more monstrous than these invaders are the pressures we humans place on the ocean. Plastic pollution and human-influenced climate change are endangering marine life and ecosystems. These are the real sea monsters, and the exhibition shows that we have the chance to work toward sustainable solutions that protect our oceans for future generations.
The exhibition is open to the public through June 26, 2026.
Detail of Islandia (1595), a map of Iceland from a 1595 edition of Abraham Ortelius’ atlas, Theatrum Orbis Terrarum. Courtesy of the Harvard Map Collection, Harvard Library.
Girguis shares information on two illustrations with captions that read: “Gustave Dore’s illustration for Ludovico Ariosto’s epic poem ‘Orlando Furioso’ shows a knight fighting a monstrous sea creature” and “Victor Nehlig (1830-1909) was a French painter known for his dramatic and narrative scenes. This illustration shows a giant squid attacking a boat, depicting the dangerous encounter between humans and a sea monster, with the crew fighting for their lives.”
Young visitors from the community, Coral and Alexander Ain, watch a projection of the film “Jaws” within the exhibit.
Literary volumes of Homer’s “Odyssey” and Melville’s “Moby-Dick” are on display.
Girguis speaks about a display of sea monster drawings.
A collection of maps used for navigation, including a map of Cape Cod (1926) created by Mélanie Elizabeth Leonard of Massachusetts, are on display. Courtesy of the Harvard Map Collection, Harvard Library.
A recent award from the U.S. Defense Advanced Research Projects Agency (DARPA) brings together researchers from Massachusetts Institute of Technology (MIT), Carnegie Mellon University (CMU), and Lehigh University (Lehigh) under the Multiobjective Engineering and Testing of Alloy Structures (METALS) program. The team will research novel design tools for the simultaneous optimization of shape and compositional gradients in multi-material structures that complement new high-throughput materials testing techniques, with particular attention paid to the bladed disk (blisk) geometry commonly found in turbomachinery (including jet and rocket engines) as an exemplary challenge problem.
“This project could have important implications across a wide range of aerospace technologies. Insights from this work may enable more reliable, reusable, rocket engines that will power the next generation of heavy-lift launch vehicles,” says Zachary Cordero, the Esther and Harold E. Edgerton Associate Professor in the MIT Department of Aeronautics and Astronautics (AeroAstro) and the project’s lead principal investigator. “This project merges classical mechanics analyses with cutting-edge generative AI design technologies to unlock the plastic reserve of compositionally graded alloys allowing safe operation in previously inaccessible conditions.”
Different locations in blisks require different thermomechanical properties and performance, such as creep resistance, low-cycle-fatigue resistance, and high strength. Large-scale production also necessitates consideration of cost and sustainability metrics, such as the sourcing and recycling of alloys, in the design.
“Currently, with standard manufacturing and design procedures, one must come up with a single magical material, composition, and processing parameters to meet ‘one part-one material’ constraints,” says Cordero. “Desired properties are also often mutually exclusive prompting inefficient design tradeoffs and compromises.”
Although a one-material approach may be optimal for a single location in a component, it may leave other locations exposed to failure, or it may require a critical material to be carried throughout an entire part when it is only needed in a specific location. With the rapid advancement of additive manufacturing processes that enable voxel-based composition and property control, the team sees unique opportunities for leap-ahead performance in structural components.
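As a rough illustration of why such problems favor graded designs, consider the toy sketch below: two competing, location-dependent property penalties over a normalized blisk radius, scanned with a weighted sum to trace a Pareto front. The property models, names, and numbers are invented placeholders, not the METALS design tools or real alloy physics.

```python
import numpy as np

# Toy multiobjective design of a radial compositional gradient in a blisk.
# x[i] is the fraction of a hypothetical creep-resistant solute at radial
# station i. Both penalty models are illustrative placeholders.
n_stations = 20
radii = np.linspace(0.2, 1.0, n_stations)   # normalized blisk radius

def creep_penalty(x):
    # Creep matters most at the hot, highly loaded rim (large radius),
    # so missing solute there is penalized most.
    return np.sum(radii ** 2 * (1.0 - x))

def fatigue_penalty(x):
    # In this toy model the solute hurts low-cycle-fatigue life, which is
    # most critical near the bore (small radius).
    return np.sum((1.0 - radii) ** 2 * x)

# A weighted-sum scan over the two objectives traces out a Pareto front.
# Because the toy objectives are linear in x, each weight w yields a simple
# station-by-station optimum, i.e., a composition profile graded in radius.
for w in np.linspace(0.1, 0.9, 5):
    cost_of_solute = (1 - w) * (1 - radii) ** 2 - w * radii ** 2
    x = (cost_of_solute < 0).astype(float)  # add solute where it lowers cost
    print(f"w={w:.1f}  creep={creep_penalty(x):5.2f}  "
          f"fatigue={fatigue_penalty(x):4.2f}  "
          f"solute-rich stations={int(x.sum())}/{n_stations}")
```

Even in this caricature, the optimal solute profile switches with radius rather than staying uniform, which is the basic case for compositional grading.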
Cordero’s collaborators include Zoltan Spakovszky, the T. Wilson (1953) Professor in Aeronautics in AeroAstro; A. John Hart, the Class of 1922 Professor and head of the Department of Mechanical Engineering; Faez Ahmed, ABS Career Development Assistant Professor of mechanical engineering at MIT; S. Mohadeseh Taheri-Mousavi, assistant professor of materials science and engineering at CMU; and Natasha Vermaak, associate professor of mechanical engineering and mechanics at Lehigh.
The team’s expertise spans hybrid integrated computational material engineering and machine-learning-based material and process design, precision instrumentation, metrology, topology optimization, deep generative modeling, additive manufacturing, materials characterization, thermostructural analysis, and turbomachinery.
“It is especially rewarding to work with the graduate students and postdoctoral researchers collaborating on the METALS project, spanning from developing new computational approaches to building test rigs operating under extreme conditions,” says Hart. “It is a truly unique opportunity to build breakthrough capabilities that could underlie propulsion systems of the future, leveraging digital design and manufacturing technologies.”
This research is funded by DARPA under contract HR00112420303. The views, opinions, and/or findings expressed are those of the author and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. government and no official endorsement should be inferred.
MIT researchers have some good environmental news: Mercury emissions from human activity have been declining over the past two decades, despite global emissions inventories that indicate otherwise.
In a new study, the researchers analyzed measurements from all available monitoring stations in the Northern Hemisphere and found that atmospheric concentrations of mercury declined by about 10 percent between 2005 and 2020.
They used two separate modeling methods to determine what is driving that trend. Both techniques pointed to a decline in mercury emissions from human activity as the most likely cause.
Global inventories, on the other hand, have reported opposite trends. These inventories estimate atmospheric emissions using models that incorporate average emission rates of polluting activities and the scale of these activities worldwide.
“Our work shows that it is very important to learn from actual, on-the-ground data to try and improve our models and these emissions estimates. This is very relevant for policy because, if we are not able to accurately estimate past mercury emissions, how are we going to predict how mercury pollution will evolve in the future?” says Ari Feinberg, a former postdoc in the Institute for Data, Systems, and Society (IDSS) and lead author of the study.
The new results could help inform scientists who are embarking on a collaborative, global effort to evaluate pollution models and develop a more in-depth understanding of what drives global atmospheric concentrations of mercury.
However, due to a lack of data from global monitoring stations and limitations in the scientific understanding of mercury pollution, the researchers couldn’t pinpoint a definitive reason for the mismatch between the inventories and the recorded measurements.
“It seems like mercury emissions are moving in the right direction, and could continue to do so, which is heartening to see. But this was as far as we could get with mercury. We need to keep measuring and advancing the science,” adds co-author Noelle Selin, an MIT professor in the IDSS and the Department of Earth, Atmospheric and Planetary Sciences (EAPS).
Feinberg and Selin, his MIT postdoctoral advisor, are joined on the paper by an international team of researchers that contributed atmospheric mercury measurement data and statistical methods to the study. The research appears this week in the Proceedings of the National Academy of Sciences.
Mercury mismatch
The Minamata Convention is a global treaty that aims to cut human-caused emissions of mercury, a potent neurotoxin that enters the atmosphere from sources like coal-fired power plants and small-scale gold mining.
The treaty, which was signed in 2013 and went into force in 2017, is evaluated every five years. The first meeting of its conference of parties coincided with disheartening news reports that said global inventories of mercury emissions, compiled in part from information from national inventories, had increased despite international efforts to reduce them.
This was puzzling news for environmental scientists like Selin. Data from monitoring stations showed atmospheric mercury concentrations declining during the same period.
Bottom-up inventories combine emission factors, such as the amount of mercury that enters the atmosphere when coal mined in a certain region is burned, with estimates of pollution-causing activities, like how much of that coal is burned in power plants.
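In code, the bottom-up arithmetic is simply a sum of activity levels multiplied by emission factors, as in the sketch below; the regions and numbers are invented purely for illustration.

```python
# Minimal sketch of a bottom-up emissions inventory (illustrative numbers):
# total emissions = sum over activities of (activity level) x (emission factor).
coal_burned_t = {"region_A": 3.0e8, "region_B": 1.2e8}    # tonnes coal per year
hg_factor_g_per_t = {"region_A": 0.15, "region_B": 0.05}  # grams Hg per tonne

total_hg_t = sum(
    coal_burned_t[r] * hg_factor_g_per_t[r] / 1e6         # grams -> tonnes
    for r in coal_burned_t
)
print(f"Bottom-up estimate: {total_hg_t:.0f} t Hg/yr")
```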
“The big question we wanted to answer was: What is actually happening to mercury in the atmosphere and what does that say about anthropogenic emissions over time?” Selin says.
Modeling mercury emissions is especially tricky. First, mercury is the only metal that is in liquid form at room temperature, so it has unique properties. Moreover, mercury that has been removed from the atmosphere by sinks like the ocean or land can be re-emitted later, making it hard to identify primary emission sources.
At the same time, mercury is more difficult to study in laboratory settings than many other air pollutants, especially due to its toxicity, so scientists have limited understanding of all chemical reactions mercury can undergo. There is also a much smaller network of mercury monitoring stations, compared to other polluting gases like methane and nitrous oxide.
“One of the challenges of our study was to come up with statistical methods that can address those data gaps, because available measurements come from different time periods and different measurement networks,” Feinberg says.
Multifaceted models
The researchers compiled data from 51 stations in the Northern Hemisphere. They used statistical techniques to aggregate data from nearby stations, which helped them overcome data gaps and evaluate regional trends.
By combining data from 11 regions, their analysis indicated that Northern Hemisphere atmospheric mercury concentrations declined by about 10 percent between 2005 and 2020.
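A minimal sketch of that kind of station aggregation, on synthetic data: remove each station's own baseline, average the anomalies within a region, and fit a linear trend to the result. The gap pattern, noise level, and trend size here are illustrative; the study's statistical treatment is considerably more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(2005, 2021)

# Three stations in one region, each with gaps and its own baseline offset.
true_trend = -0.01  # ng/m^3 per year, illustrative
stations = []
for offset in (1.6, 1.4, 1.5):
    conc = offset + true_trend * (years - 2005) \
        + 0.02 * rng.standard_normal(years.size)
    mask = rng.random(years.size) < 0.85        # ~15% of years missing
    stations.append(np.where(mask, conc, np.nan))

# Remove each station's own mean (anomalies), then average across stations,
# so baseline differences and gaps don't bias the regional series.
anoms = [s - np.nanmean(s) for s in stations]
regional = np.nanmean(np.vstack(anoms), axis=0)

# Ordinary least-squares trend on the regional anomaly series.
ok = ~np.isnan(regional)
slope, intercept = np.polyfit(years[ok], regional[ok], 1)
print(f"estimated regional trend: {slope:+.4f} ng/m^3 per year")
```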
Then the researchers used two modeling methods — biogeochemical box modeling and chemical transport modeling — to explore possible causes of that decline. Box modeling was used to run hundreds of thousands of simulations to evaluate a wide array of emission scenarios. Chemical transport modeling is more computationally expensive but enables researchers to assess the impacts of meteorology and spatial variations on trends in selected scenarios.
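The box-modeling logic can be illustrated with a one-box sketch: the atmospheric burden relaxes toward a steady state set by primary emissions, legacy re-emission, and an effective lifetime, so scanning emission scenarios shows which one reproduces an observed decline. All rate constants and stocks below are invented for illustration; the study's models resolve far more reservoirs and chemistry.

```python
# One-box atmosphere: d(burden)/dt = E + R - burden / tau, where E is primary
# anthropogenic emissions, R is legacy re-emission from ocean and land, and
# tau is an effective atmospheric lifetime (all values illustrative).

def atmospheric_burden(E, R=1500.0, tau=0.5, years=15, dt=0.01, E_start=2000.0):
    burden = tau * (E_start + R)          # begin at the old steady state
    for _ in range(int(years / dt)):      # simple Euler integration
        burden += dt * (E + R - burden / tau)
    return burden

old = atmospheric_burden(E=2000.0)        # scenario with unchanged emissions
# Scan emission scenarios to see which reproduces a ~10% observed decline.
for E_new in (2000.0, 1800.0, 1650.0, 1400.0):
    change = 100.0 * (atmospheric_burden(E_new) / old - 1.0)
    print(f"E = {E_new:6.0f} Mg/yr -> burden change over 15 yr: {change:+5.1f}%")
```

Under these toy numbers, an emissions cut of roughly 17 percent reproduces a 10 percent burden decline; the real models draw that kind of inference with far more realism.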
For instance, they tested the hypothesis that an additional environmental sink is removing more mercury from the atmosphere than previously thought. The models let them gauge whether an unknown sink of that magnitude would be physically plausible.
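To make the box-modeling idea concrete, here is a minimal sketch of how such scenario screening might work, assuming a one-box atmosphere; the lifetime, trend range, and tolerance below are illustrative stand-ins, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_box(annual_emissions, lifetime_yr=0.5, dt=1/12):
    """One-box atmosphere: d(burden)/dt = emissions - burden/lifetime."""
    burden = annual_emissions[0] * lifetime_yr   # start near steady state
    history = []
    for e in annual_emissions:
        for _ in range(12):                      # monthly time steps
            burden += dt * (e - burden / lifetime_yr)
        history.append(burden)
    return np.array(history)

# Screen random emission trajectories against the observed ~10 percent
# decline in Northern Hemisphere concentrations from 2005 to 2020.
years = np.arange(2005, 2021)
consistent = 0
for _ in range(10_000):
    trend = rng.uniform(-0.03, 0.03)             # fractional change per year
    emissions = (1 + trend) ** (years - years[0])
    burden = run_box(emissions)
    if abs(burden[-1] / burden[0] - 0.90) < 0.02:
        consistent += 1
print(consistent, "of 10,000 scenarios match the observed decline")
```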
“As we went through each hypothesis systematically, we were pretty surprised that we could really point to declines in anthropogenic emissions as being the most likely cause,” Selin says.
Their work underscores the importance of long-term mercury monitoring stations, Feinberg adds. Many stations the researchers evaluated are no longer operational because of a lack of funding.
While their analysis couldn’t zero in on exactly why the emissions inventories didn’t match up with actual data, they have a few hypotheses.
One possibility is that global inventories are missing key information from certain countries. For instance, the researchers resolved some discrepancies when they used a more detailed regional inventory from China. But there was still a gap between observations and estimates.
They also suspect the discrepancy might be the result of changes in two large sources of mercury that are particularly uncertain: emissions from small-scale gold mining and mercury-containing products.
Small-scale gold mining involves using mercury to extract gold from soil and is often performed in remote parts of developing countries, making it hard to estimate. Yet small-scale gold mining contributes about 40 percent of human-made emissions.
In addition, it’s difficult to determine how long it takes the pollutant to be released into the atmosphere from discarded products like thermometers or scientific equipment.
“We’re not there yet where we can really pinpoint which source is responsible for this discrepancy,” Feinberg says.
In the future, researchers from multiple countries, including a team from MIT, will collaborate to study and improve the models used to estimate and evaluate emissions. Feinberg says the present study will help that effort move the needle on mercury monitoring.
“Our work shows that it is very important to learn from actual, on-the-ground data to try and improve our models and these emissions estimates,” Feinberg says.
This research was funded by the Swiss National Science Foundation, the U.S. National Science Foundation, and the U.S. Environmental Protection Agency.
Harvard Powwow brings together Native students, family, friends
Nikki Rojas
Harvard Staff Writer
The 26th annual Harvard Powwow was a family affair for renowned American Indian scholar Tink Tinker of Osage County and his great-niece Lena Tinker ’25, Osage Nation.
“I so appreciate Lena. I’ve watched her grow up,” he said. “We knew when you were little that you were smart. Now you’re almost a Harvard graduate.”
Lena smiled as she remembered her uncle supporting her decision to come to Harvard. “Now getting to have him here my senior year is very special,” she said. “It’s good to have family here.”
Despite the generational differences, the Tinkers were happy to connect with other Natives at the Sept. 28 gathering. “I like the element of thinking back on the powwow history that we have in this country as this place where we come together to gather in community,” Lena said about this year’s theme, “In My Powwow Era.”
Lena Tinker ’25 makes a ribbon skirt for the powwow.
Stephanie Mitchell/Harvard Staff Photographer
Tink, a professor emeritus at Iliff School of Theology, attended the powwow ahead of the first of four trips to campus as visiting Indigenous Spiritual Leader. The four-week residency, part of a collaboration between the Memorial Church and the Harvard University Native American Program (HUNAP), will see Tink working with students, collaborating with faculty and staff, and publicly presenting his work.
“It feels good,” he said of the powwow. “It feels like Indian Country. For me, coming from Denver and being Osage from Oklahoma, I’m hooking up with relatives here. I can hear their voices at the microphone and in the songs. I can see them in their dance steps.”
Following in her great-uncle’s service-driven footsteps, Lena has been an active member of the Native community on campus. As a first-year, she joined Natives at Harvard College (NaHC) and now serves as co-president of the student group.
“My first memory of meeting Native people on campus was on the steps of Widener Library on a beautiful, sunny day,” she said. “I remember that first year wondering what the Native community was like here and slowly getting to meet everyone over the course of my four years here.
HUNAP hosted a community building event that included a friendship bracelet-making session.
Photo by Jodi Hilton
Catherine Dondero puts the finishing touches on her ribbon skirt.
Stephanie Mitchell/Harvard Staff Photographer
“For a lot of us, NaHC is home on this campus and those people are like family. It’s a space that feels special and different from anywhere else at the University,” she added.
While reflecting on their favorite memories of the student group, Lena and classmates brought up those who had graduated, affectionately called their NaHC elders. “We hold that memory in the stories we share with each other on campus,” she said.
Karen Medina-Perez ’24, who has ancestral connections to the Lambayeque and Caxamarca region in the Andes and Afro-Indigenous ancestry from Aroa, Yaracuy, is one such NaHC elder who returned for the powwow to reconnect with old friends and “celebrate our existence.”
In the days leading up to the powwow, HUNAP hosted several community-building events. Students gathered to work on regalia (specifically ribbon shirts and skirts) and friendship bracelets, and even took lessons in social dancing from Kabl Wilkerson of the Citizen Potawatomi Nation, a doctoral candidate in the History Department.
For the elder Tinker, the power of powwows is their ability to bridge the gap among generations of Native people. “That’s the Indian world,” he said. “Far back as I can remember, the social event meant all generations were present, from the littlest to the oldest, and especially the in-betweens.”
“Harvard Powwow is a moment for our community to come together and celebrate,” said Jordan Clark of the Wampanoag Tribe of Aquinnah, acting executive director of HUNAP. “When we think of community that word is all-encompassing. It spans the University, the region, and generations.”
A photographer who makes historical subjects dance
Wendel White manifests the impetus behind his new monograph during Harvard talk
Nikki Rojas
Harvard Staff Writer
“I am increasingly interested in the residual power of the past to inhabit material remains,” said Wendel White.
Photos by Niles Singer/Harvard Staff Photographer
For more than 30 years, photographer Wendel White has dedicated his craft to documenting Black lives throughout American history. In his latest project, “Manifest: Thirteen Colonies,” White turned his lens to African American materials held in collections throughout the 13 original U.S. colonies.
Among White’s 235 subjects are hair clippings from Frederick Douglass and Harriet Beecher Stowe, a Civil War-era brogan-style shoe, as well as photographs, diaries, and documents.
He came across the Civil War-era leather and wood-soled shoe at the North Carolina Museum of History in Raleigh. While the shoe does not have a known connection with Black or white individuals, White said it made him realize how his work is not “really just about Black life, but how Black life is defined by virtual whiteness.” The collection in North Carolina held particular significance for White, who shared that his great-grandfather escaped enslavement and joined the Union Army in North Carolina.
Brenda Tindal, chief campus curator, contributed an essay to White’s book.
In an intimate conversation with curators and scholars of Black history and visual arts on Sept. 26, White launched his new monograph, “Wendel A. White: Manifest | Thirteen Colonies,” which accompanies his exhibition at the Peabody Museum of Archaeology & Ethnology.
“I am increasingly interested in the residual power of the past to inhabit material remains,” White said. “The ability of objects to transcend the moment suggests a remarkable mechanism for golden time, bringing the past and the present into a shared space.
“These artifacts are the forensic evidence of Black life and events in the United States,” he continued. “The photographs form a reliquary and a survey of the impulse and motivation to preserve history and memory.”
Faculty of Arts and Sciences chief campus curator Brenda Tindal, who moderated the conversation at the Geological Lecture Hall and contributed an essay to White’s book, kicked off the hourlong discussion by crediting the photographer for helping to inform and visualize Black history and culture.
“I’ve been sitting with this work for several months, and in some ways, this tome has become a bit of a prism through which the identity and the quotidian contours of Black life and culture come into such sharp relief,” Tindal said.
White and Tindal, who was recently appointed co-chair of the memorial committee for the Harvard & Legacy of Slavery Initiative, were joined by fellow contributors Cheryl Finley, Atlanta University Center Art History + Curatorial Studies Collective; Leigh Raiford, University of California, Berkeley; and Deborah Willis, Tisch School of the Arts, New York University.
While the conversation largely focused on White’s latest project, audiences also learned about his process and attention to detail. Raiford said she was struck by “the way the objects are made to dance” in White’s photography and “the way that they’re held in this warm light that reminds us to have a certain kind of reverence for the past and for history.” It was a sentiment echoed by fellow panelists, who praised White for his ability to help viewers reimagine the lives of Black people in the past.
“There seems to be this really sort of subjective narration of Black culture and life, and it’s often through this very narrow contour from slavery to segregation to civil rights. That narrative really situates Black life and culture within this really over-determined domain of struggle,” Tindal said, before asking the panel how archives of African American material culture in public collections help transform the historical perception and understanding of Black history.
Willis pointed to how White frames objects in his project to help uncover stories about love, protection, and respect, and thus move away from the narrow view of Black history.
Special attention was also given to White’s artistic choice to blur part of his photographs. While many of the objects photographed for the project contrast sharply with the black velvet they are placed on, certain areas of each item are blurred.
“One of the things that was helpful for me is that I felt like the blur held back some of the violence of the archives,” said Tracey Hucks, Victor S. Thomas Professor of Africana Religious Studies at Harvard Divinity School and Suzanne Young Murray Professor at the Radcliffe Institute for Advanced Study, from the audience.
White began this project in 2021, after being named a Robert Gardner Fellow in Photography. The Peabody grants the annual fellowship to a photographer to document the human condition around the world. Forty-six images of White’s latest work are on display at the Peabody in an exhibition through April 13, 2025.
Industrial electrochemical processes that use electrodes to produce fuels and chemical products are hampered by the formation of bubbles that block parts of the electrode surface, reducing the area available for the active reaction. Such blockage reduces the performance of the electrodes by anywhere from 10 to 25 percent.
But new research reveals a decades-long misunderstanding about the extent of that interference. The findings show exactly how the blocking effect works and could lead to new ways of designing electrode surfaces to minimize inefficiencies in these widely used electrochemical processes.
It has long been assumed that the entire area of the electrode shadowed by each bubble would be effectively inactivated. But it turns out that a much smaller area — roughly the area where the bubble actually contacts the surface — is blocked from its electrochemical activity. The new insights could lead directly to new ways of patterning the surfaces to minimize the contact area and improve overall efficiency.
The findings are reported today in the journal Nanoscale, in a paper by recent MIT graduate Jack Lake PhD ’23, graduate student Simon Rufer, professor of mechanical engineering Kripa Varanasi, research scientist Ben Blaiszik, and six others at the University of Chicago and Argonne National Laboratory. The team has made available an open-source, AI-based software tool that engineers and scientists can now use to automatically recognize and quantify bubbles formed on a given surface, as a first step toward controlling the electrode material’s properties.
Gas-evolving electrodes, often with catalytic surfaces that promote chemical reactions, are used in a wide variety of processes, including the production of “green” hydrogen without the use of fossil fuels, carbon-capture processes that can reduce greenhouse gas emissions, aluminum production, and the chlor-alkali process that is used to make widely used chemical products.
These are very widespread processes. The chlor-alkali process alone accounts for 2 percent of all U.S. electricity usage; aluminum production accounts for 3 percent of global electricity; and both carbon capture and hydrogen production are likely to grow rapidly in coming years as the world strives to meet greenhouse-gas reduction targets. So, the new findings could make a real difference, Varanasi says.
“Our work demonstrates that engineering the contact and growth of bubbles on electrodes can have dramatic effects” on how bubbles form and how they leave the surface, he says. “The knowledge that the area under bubbles can be significantly active ushers in a new set of design rules for high-performance electrodes to avoid the deleterious effects of bubbles.”
“The broader literature built over the last couple of decades has suggested that not only that small area of contact but the entire area under the bubble is passivated,” Rufer says. The new study reveals “a significant difference between the two models because it changes how you would develop and design an electrode to minimize these losses.”
To test and demonstrate the implications of this effect, the team produced different versions of electrode surfaces with patterns of dots that nucleated and trapped bubbles at different sizes and spacings. They were able to show that surfaces with widely spaced dots promoted large bubble sizes but only tiny areas of surface contact, which helped to make clear the difference between the expected and actual effects of bubble coverage.
Developing the software to detect and quantify bubble formation was necessary for the team’s analysis, Rufer explains. “We wanted to collect a lot of data and look at a lot of different electrodes and different reactions and different bubbles, and they all look slightly different,” he says. Creating a program that could deal with different materials and different lighting and reliably identify and track the bubbles was a tricky process, and machine learning was key to making it work, he says.
Using that tool, he says, they were able to collect “really significant amounts of data about the bubbles on a surface, where they are, how big they are, how fast they’re growing, all these different things.” The tool is now freely available for anyone to use via the GitHub repository.
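The team's actual tool is machine-learning-based; purely as a simplified illustration of the detect-and-quantify step it automates, a classical computer-vision baseline (using OpenCV, with hypothetical parameter values) might look like this:

```python
# Simplified illustration only, not the team's software: find bright,
# roughly circular blobs (candidate bubbles) in a grayscale electrode
# image and report their positions and areas.
import cv2
import numpy as np

def detect_bubbles(frame_gray, min_area_px=20):
    """frame_gray: single-channel uint8 image. Returns [(x, y, area), ...]."""
    # Smooth sensor noise, then separate bright bubbles from the
    # darker electrode background with an automatic Otsu threshold.
    blurred = cv2.GaussianBlur(frame_gray, (5, 5), 0)
    _, mask = cv2.threshold(blurred, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    bubbles = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < min_area_px:
            continue  # ignore specks below the size cutoff
        (x, y), _ = cv2.minEnclosingCircle(c)
        bubbles.append((x, y, area))
    return bubbles
```

Running a detector like this frame by frame and matching nearby detections over time is what allows position, size, and growth rate to be tracked for every bubble, which is the kind of data the real tool extracts at scale.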
By using that tool to correlate the visual measures of bubble formation and evolution with electrical measurements of the electrode’s performance, the researchers were able to disprove the accepted theory and to show that only the area of direct contact is affected. Videos further proved the point, revealing new bubbles actively evolving directly under parts of a larger bubble.
The researchers developed a very general methodology that can be applied to characterize and understand the impact of bubbles on any electrode or catalyst surface. They quantified the bubble passivation effects in a new performance metric they call BECSA (bubble-induced electrochemically active surface area), a refinement of the standard ECSA (electrochemically active surface area) used in the field. “The BECSA metric was a concept we defined in an earlier study but did not have an effective method to estimate until this work,” says Varanasi.
Because the area under bubbles remains significantly active, electrode designers should seek to minimize bubble contact area rather than simply bubble coverage, which can be achieved by controlling the morphology and chemistry of the electrodes. Surfaces engineered to control bubbles can not only improve the overall efficiency of the processes, and thus reduce energy use, but also save on upfront materials costs. Many gas-evolving electrodes are coated with catalysts made of expensive metals like platinum or iridium, and the findings from this work can be used to engineer electrodes that waste less of that costly material on reaction-blocking bubbles.
Varanasi says that “the insights from this work could inspire new electrode architectures that not only reduce the usage of precious materials, but also improve the overall electrolyzer performance,” both of which would provide large-scale environmental benefits.
The research team included Jim James, Nathan Pruyne, Aristana Scourtas, Marcus Schwarting, Aadit Ambalkar, Ian Foster, and Ben Blaiszik at the University of Chicago and Argonne National Laboratory. The work was supported by the U.S. Department of Energy under the ARPA-E program. This work made use of the MIT.nano facilities.
The award funds innovative but inherently risky research endeavors that have the potential to overturn existing scientific paradigms or create new ones.
Weill Cornell Medicine researchers have found that removing protected-class regulation from Medicare prescription drug policies could greatly reduce the United States' prescription drug spending; they estimate it would have saved $47 billion between 2011 and 2019.
Hopfield, the Howard A. Prior Professor in the Life Sciences, Emeritus, and professor of molecular biology, emeritus, shares the 2024 Nobel Prize with Toronto's Geoffrey E. Hinton.
Hinton (King’s 1967) and Hopfield were awarded the prize ‘for foundational discoveries and inventions that enable machine learning with artificial neural networks.’ Hinton, who is known as the ‘Godfather of AI’, is Emeritus Professor of Computer Science at the University of Toronto.
This year’s two Nobel Laureates in Physics have used tools from physics to develop methods that are the foundation of today’s powerful machine learning. John Hopfield, a Guggenheim Fellow at the University of Cambridge in 1968-1969, created an associative memory that can store and reconstruct images and other types of patterns in data. Geoffrey Hinton invented a method that can autonomously find properties in data, and perform tasks such as identifying specific elements in pictures.
When we talk about artificial intelligence, we often mean machine learning using artificial neural networks. This technology was originally inspired by the structure of the brain. In an artificial neural network, the brain’s neurons are represented by nodes that have different values. These nodes influence each other through connections that can be likened to synapses and which can be made stronger or weaker. The network is trained, for example by developing stronger connections between nodes with simultaneously high values. This year’s laureates have conducted important work with artificial neural networks from the 1980s onward.
Geoffrey Hinton used a network invented by John Hopfield as the foundation for a new network: the Boltzmann machine. This can learn to recognise characteristic elements in a given type of data. Hinton used tools from statistical physics, the science of systems built from many similar components. The machine is trained by feeding it examples that are very likely to arise when the machine is run. The Boltzmann machine can be used to classify images or create new examples of the type of pattern on which it was trained. Hinton has built upon this work, helping initiate the current explosive development of machine learning.
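For readers who want the flavor of Hopfield's associative memory, here is a minimal NumPy sketch (illustrative only): patterns are stored with a Hebbian rule that strengthens connections between simultaneously active nodes, and a corrupted input is cleaned up by repeatedly updating nodes until the network settles on a stored pattern.

```python
import numpy as np

def train(patterns):
    """Hebbian storage: strengthen connections between co-active nodes."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:          # each pattern is a vector of +1/-1 values
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)      # no self-connections
    return W / len(patterns)

def recall(W, state, steps=10):
    """Update nodes one at a time until the network settles."""
    for _ in range(steps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Store one 8-node pattern, then recover it from a corrupted copy.
stored = np.array([[1, -1, 1, 1, -1, -1, 1, -1]])
W = train(stored)
noisy = stored[0].copy()
noisy[:2] *= -1                 # flip two nodes to corrupt the input
print(recall(W, noisy))         # settles back to the stored pattern
```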
Vice-Chancellor Professor Deborah Prentice said:
“Many congratulations to Professor Hinton on receiving the Nobel Prize. Our alumni are a vital part of the Cambridge community, and many of them, like Professor Hinton, have made discoveries and advances that have genuinely changed our world. On behalf of the University of Cambridge, I congratulate him on this enormous accomplishment.”
“The laureates’ work has already been of the greatest benefit. In physics we use artificial neural networks in a vast range of areas, such as developing new materials with specific properties,” says Ellen Moons, Chair of the Nobel Committee for Physics. Hinton and Hopfield are the 122nd and 123rd Members of the University of Cambridge to be awarded the Nobel Prize.
From 1980 to 1982, Hinton was a Scientific Officer at the MRC Applied Psychology Unit (as the MRC Cognition and Brain Sciences Unit was then known), before taking up a position at Carnegie Mellon University in Pittsburgh.
In May 2023, Hinton gave a public lecture at the University's Centre for the Study of Existential Risk entitled 'Two Paths to Intelligence', in which he argued that "large scale digital computation is probably far better at acquiring knowledge than biological computation and may soon be much more intelligent than us".
Geoffrey Hinton, an alumnus of the University of Cambridge, was awarded the 2024 Nobel Prize in Physics, jointly with John Hopfield of Princeton University.
Associate Professor Sajikumar Sreedharan from the Department of Physiology at the NUS Yong Loo Lin School of Medicine has received the International Association for the Study of Neurons and Brain Diseases (AND) Investigator Award in recognition of his research over the past two decades on how long-term memories are systematically stored in the brain.
AND is an organisation of neuroscientists based at the University of Toronto, Canada. Assoc Prof Sajikumar was presented with the Investigator Award at the association’s annual meeting, which brought together scientists from around the world who study memory formation and its degeneration to share and discuss their latest research. The meeting was held in Qingdao, China, from 9 to 11 September 2024.
Assoc Prof Sajikumar, who conducts research on the molecular mechanisms of memory, was selected by a scientific committee of world-renowned neuroscientists for significantly advancing the understanding of the neuroscience behind how short-term memories systematically transition into long-term memories. The committee also noted that “his work provides a deeper insight into memory impairments due to ageing, learning disabilities, and mental illnesses.”
MIT engineers have built a new desalination system that runs with the rhythms of the sun.
The solar-powered system removes salt from water at a pace that closely follows changes in solar energy. As sunlight increases through the day, the system ramps up its desalting process and automatically adjusts to any sudden variation in sunlight, for example by dialing down in response to a passing cloud or revving up as the skies clear.
Because the system can quickly react to subtle changes in sunlight, it maximizes the utility of solar energy, producing large quantities of clean water despite variations in sunlight throughout the day. In contrast to other solar-driven desalination designs, the MIT system requires no extra batteries for energy storage, nor a supplemental power supply, such as from the grid.
The engineers tested a community-scale prototype on groundwater wells in New Mexico over six months, working in variable weather conditions and water types. The system harnessed on average over 94 percent of the electrical energy generated from the system’s solar panels to produce up to 5,000 liters of water per day despite large swings in weather and available sunlight.
“Conventional desalination technologies require steady power and need battery storage to smooth out a variable power source like solar. By continually varying power consumption in sync with the sun, our technology directly and efficiently uses solar power to make water,” says Amos Winter, the Germeshausen Professor of Mechanical Engineering and director of the K. Lisa Yang Global Engineering and Research (GEAR) Center at MIT. “Being able to make drinking water with renewables, without requiring battery storage, is a massive grand challenge. And we’ve done it.”
The system is geared toward desalinating brackish groundwater — a salty source of water that is found in underground reservoirs and is more prevalent than fresh groundwater resources. The researchers see brackish groundwater as a huge untapped source of potential drinking water, particularly as reserves of fresh water are stressed in parts of the world. They envision that the new renewable, battery-free system could provide much-needed drinking water at low costs, especially for inland communities where access to seawater and grid power are limited.
“The majority of the population actually lives far enough from the coast that seawater desalination could never reach them. They consequently rely heavily on groundwater, especially in remote, low-income regions. And unfortunately, this groundwater is becoming more and more saline due to climate change,” says Jonathan Bessette, an MIT PhD student in mechanical engineering. “This technology could bring sustainable, affordable clean water to underreached places around the world.”
The researchers report details of the new system in a paper appearing today in Nature Water. The study’s co-authors are Bessette, Winter, and staff engineer Shane Pratt.
Pump and flow
The new system builds on a previous design, which Winter and his colleagues, including former MIT postdoc Wei He, reported earlier this year. That system aimed to desalinate water through “flexible batch electrodialysis.”
Electrodialysis and reverse osmosis are two of the main methods used to desalinate brackish groundwater. With reverse osmosis, pressure is used to pump salty water through a membrane and filter out salts. Electrodialysis uses an electric field to draw out salt ions as water is pumped through a stack of ion-exchange membranes.
Scientists have looked to power both methods with renewable sources. But this has been especially challenging for reverse osmosis systems, which traditionally run at a steady power level that’s incompatible with naturally variable energy sources such as the sun.
Winter, He, and their colleagues focused on electrodialysis, seeking ways to make a more flexible, “time-variant” system that would be responsive to variations in renewable, solar power.
In their previous design, the team built an electrodialysis system consisting of water pumps, an ion-exchange membrane stack, and a solar panel array. The innovation in this system was a model-based control system that used sensor readings from every part of the system to predict the optimal rate at which to pump water through the stack and the voltage that should be applied to the stack to maximize the amount of salt drawn out of the water.
When the team tested this system in the field, it was able to vary its water production with the sun’s natural variations. On average, the system directly used 77 percent of the available electrical energy produced by the solar panels, which the team estimated was 91 percent more than traditionally designed solar-powered electrodialysis systems.
Still, the researchers felt they could do better.
“We could only calculate every three minutes, and in that time, a cloud could literally come by and block the sun,” Winter says. “The system could be saying, ‘I need to run at this high power.’ But some of that power has suddenly dropped because there’s now less sunlight. So, we had to make up that power with extra batteries.”
Solar commands
In their latest work, the researchers looked to eliminate the need for batteries by shaving the system’s response time to a fraction of a second. The new system is able to update its desalination rate three to five times per second. The faster response time enables the system to adjust to changes in sunlight throughout the day, without having to make up any lag in power with additional power supplies.
The key to the nimbler desalting is a simpler control strategy, devised by Bessette and Pratt. The new strategy is one of “flow-commanded current control,” in which the system first senses the amount of solar power that is being produced by the system’s solar panels. If the panels are generating more power than the system is using, the controller automatically “commands” the system to dial up its pumping, pushing more water through the electrodialysis stacks. Simultaneously, the system diverts some of the additional solar power by increasing the electrical current delivered to the stack, to drive more salt out of the faster-flowing water.
“Let’s say the sun is rising every few seconds,” Winter explains. “So, three times a second, we’re looking at the solar panels and saying, ‘Oh, we have more power — let’s bump up our flow rate and current a little bit.’ When we look again and see there’s still more excess power, we’ll up it again. As we do that, we’re able to closely match our consumed power with available solar power really accurately, throughout the day. And the quicker we loop this, the less battery buffering we need.”
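In pseudocode terms, the flow-commanded strategy amounts to a fast proportional loop. The sketch below is a minimal illustration under stated assumptions, not the team's actual controller: the gains, update rate, and the `hw` hardware interface are all hypothetical.

```python
import time

KP_FLOW = 0.8     # hypothetical gain: pump flow change per watt of surplus
KP_CURRENT = 0.5  # hypothetical gain: stack current change per watt of surplus

def control_step(hw):
    """One loop iteration of flow-commanded current control."""
    # Surplus is solar power produced minus power currently consumed.
    surplus = hw.solar_power_w() - hw.consumed_power_w()
    # More sun than we are using: push more water through the stack,
    # and divert the extra power into a higher stack current.
    # (A deficit makes surplus negative, dialing both back down.)
    hw.set_flow_rate(hw.flow_rate() + KP_FLOW * surplus)
    hw.set_stack_current(hw.stack_current() + KP_CURRENT * surplus)

def run(hw, hz=4):
    # Looping a few times per second keeps consumption matched to the
    # sun closely enough that no battery buffer is needed.
    while True:
        control_step(hw)
        time.sleep(1.0 / hz)
```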
The engineers incorporated the new control strategy into a fully automated system that they sized to desalinate brackish groundwater at a daily volume that would be enough to supply a small community of about 3,000 people. They operated the system for six months on several wells at the Brackish Groundwater National Desalination Research Facility in Alamogordo, New Mexico. Throughout the trial, the prototype operated under a wide range of solar conditions, harnessing over 94 percent of the solar panel’s electrical energy, on average, to directly power desalination.
“Compared to how you would traditionally design a solar desal system, we cut our required battery capacity by almost 100 percent,” Winter says.
The engineers plan to further test and scale up the system in hopes of supplying larger communities, and even whole municipalities, with low-cost, fully sun-driven drinking water.
“While this is a major step forward, we’re still working diligently to continue developing lower cost, more sustainable desalination methods,” Bessette says.
“Our focus now is on testing, maximizing reliability, and building out a product line that can provide desalinated water using renewables to multiple markets around the world,” Pratt adds.
The team will be launching a company based on their technology in the coming months.
This research was supported in part by the National Science Foundation, the Julia Burke Foundation, and the MIT Morningside Academy of Design. This work was additionally supported in-kind by Veolia Water Technologies and Solutions and Xylem Goulds.
Jon Bessette sits atop a trailer housing the electrodialysis desalination system at the Brackish Groundwater National Desalination Research Facility (BGNDRF) in Alamogordo, New Mexico. The system is connected to real groundwater, water tanks, and solar panels.
More than 2,000 undergraduates and recent alumni from the NUS College of Humanities and Sciences (CHS) visited the inaugural NUS CHS Career and Internship Fair held at the NUS University Town’s Stephen Riady Centre to connect with companies, explore job opportunities and receive on-the-spot advice on career skills such as networking and tackling interviews.
Organised by CHS and the NUS Centre for Future-ready Graduates (CFG) in September, the event marked a key milestone for the College as it prepares for the graduation of the pioneering CHS cohort in 2025, said Professor Sun Yeneng, Co-Dean of CHS and Dean of NUS Faculty of Science at the launch of the event.
Remarking on the strong turnout from employers and students, he noted that companies recognise the value of engaging with CHS students, who are well-equipped with interdisciplinary skills honed through the College’s unique combination of a humanities and science education.
“Employers have shared how both breadth and depth of skills and knowledge enable their hires to better collaborate across different functions, domains and geographies,” said Prof Sun.
“Others attest to the importance of being able to connect the dots in new and unusual ways, or even uncover new dots – an important attribute to help businesses formulate more holistic solutions to the complex challenges they face,” he added.
Highlights of the fair
The fair saw participation from 64 companies, representing a wide spectrum of industries, from technology and finance to education, healthcare, and government agencies. Over 256 company representatives offered students insights into their respective industries, organisational cultures, and internship or full-time employment opportunities.
The event also featured a Career Access Networking session facilitated by the CFG Career Access Team that was specifically designed for students with special accessibility and educational needs. This initiative provided a more intimate and supportive environment for these students to network with eight inclusive employers and gain access to opportunities focused on Diversity, Equity and Inclusion (DEI). In all, more than 90 conversations took place during the session with both employers and students giving the event a thumbs-up.
Prior to the fair, students had the opportunity to attend workshops organised by CHS and CFG on resume writing, interview skills and career networking. These sessions covered ways to leverage generative AI tools like ChatGPT to craft compelling resumes, optimise resumes for the AI-powered applicant tracking systems commonly used by employers, and harness platforms like LinkedIn to boost their chances of success in job searches and applications; students could also receive personalised feedback through one-on-one reviews with career advisors.
Rene Mah, a third-year CHS student who attended the fair, said, “We may have a preconceived idea of what kinds of careers we want to go into (and) which industries we want to develop our careers in. It’s good to be able to come down and see for yourself what is available out there as you might find something new.”
The event proved equally valuable for the participating employers. Ms Li Sihong, an Early Careers Recruitment Specialist from biopharmaceutical firm GSK, said, “We were able to network with students face-to-face to share in greater detail what each job is about and help them navigate their options.”
The success of this year’s fair underscores the commitment of CHS to continuously enhance students’ career readiness, ensuring they are well-prepared for the demands of the modern workforce. Associate Professor Nicholas Hon, Vice Dean (External Relations and Student Life) at the NUS Faculty of Arts and Social Sciences, said that the value of an interdisciplinary CHS education cannot be overemphasised, and that as the career landscape evolves, events like the CHS Career & Internship Fair serve as a vital bridge between education and industry.
Assoc Prof Hon added, “All employers have one very simple objective, and that is they would like to hire the most capable and most competent people that they can find. NUS has a complementary goal. We want to offer the education that will produce people who are competent, capable and are highly competitive in the job market.”
Mariam Issoufou is one of Africa’s most sought-after architects. She has held the position of Professor of Architecture Heritage and Sustainability at ETH Zurich since 2022.
Using ultra-high-resolution scanners that can see the living brain in fine detail, researchers from the Universities of Cambridge and Oxford were able to observe the damaging effects Covid-19 can have on the brain.
The study team scanned the brains of 30 people who had been admitted to hospital with severe Covid-19 early in the pandemic, before vaccines were available. The researchers found that Covid-19 infection damages the region of the brainstem associated with breathlessness, fatigue and anxiety.
The powerful MRI scanners used for the study, known as 7-Tesla or 7T scanners, can measure inflammation in the brain. Their results, published in the journal Brain, will help scientists and clinicians understand the long-term effects of Covid-19 on the brain and the rest of the body. Although the study was started before the long-term effects of Covid were recognised, it will help to better understand this condition.
The brainstem, which connects the brain to the spinal cord, is the control centre for many basic life functions and reflexes. Clusters of nerve cells in the brainstem, known as nuclei, regulate and process essential bodily functions such as breathing, heart rate, pain and blood pressure.
“Things happening in and around the brainstem are vital for quality of life, but it had been impossible to scan the inflammation of the brainstem nuclei in living people, because of their tiny size and difficult position,” said first author Dr Catarina Rua, from the Department of Clinical Neurosciences. “Usually, scientists only get a good look at the brainstem during post-mortem examinations.”
“The brainstem is the critical junction box between our conscious selves and what is happening in our bodies,” said Professor James Rowe, also from the Department of Clinical Neurosciences, who co-led the research. “The ability to see and understand how the brainstem changes in response to Covid-19 will help explain and treat the long-term effects more effectively.”
In the early days of the Covid-19 pandemic, before effective vaccines were available, post-mortem studies of patients who had died from severe Covid-19 infections showed changes in their brainstems, including inflammation. Many of these changes were thought to result from a post-infection immune response, rather than direct virus invasion of the brain.
“People who were very sick early in the pandemic showed long-lasting brain changes, likely caused by an immune response to the virus. But measuring that immune response is difficult in living people,” said Rowe. “Normal hospital-type MRI scanners can’t see inside the brain with the kind of chemical and physical detail we need.”
“But with 7T scanners, we can now measure these details. The active immune cells interfere with the ultra-high magnetic field, so that we’re able to detect how they are behaving,” said Rua. “Cambridge was special because we were able to scan even the sickest and infectious patients, early in the pandemic.”
Many of the patients admitted to hospital early in the pandemic reported fatigue, breathlessness and chest pain as troubling long-lasting symptoms. The researchers hypothesised these symptoms were in part the result of damage to key brainstem nuclei, damage which persists long after Covid-19 infection has passed.
The researchers saw that multiple regions of the brainstem, in particular the medulla oblongata, pons and midbrain, showed abnormalities consistent with a neuroinflammatory response. The abnormalities appeared several weeks after hospital admission, and in regions of the brain responsible for controlling breathing.
“The fact that we see abnormalities in the parts of the brain associated with breathing strongly suggests that long-lasting symptoms are an effect of inflammation in the brainstem following Covid-19 infection,” said Rua. “These effects are over and above the effects of age and gender, and are more pronounced in those who had had severe Covid-19.”
In addition to the physical effects of Covid-19, the 7T scanners provided evidence of some of the psychiatric effects of the disease. The brainstem monitors breathlessness, as well as fatigue and anxiety. “Mental health is intimately connected to brain health, and patients with the most marked immune response also showed higher levels of depression and anxiety,” said Rowe. “Changes in the brainstem caused by Covid-19 infection could also lead to poor mental health outcomes, because of the tight connection between physical and mental health.”
The researchers say the results could aid in the understanding of other conditions associated with inflammation of the brainstem, such as multiple sclerosis and dementia. The 7T scanners could also be used to monitor the effectiveness of different treatments for brain diseases.
“This was an incredible collaboration, right at the peak of the pandemic, when testing was very difficult, and I was amazed how well the 7T scanners worked,” said Rua. “I was really impressed with how, in the heat of the moment, the collaboration between lots of different researchers came together so effectively.”
The research was supported in part by the NIHR Cambridge Biomedical Research Centre, the NIHR Oxford Biomedical Research Centre, and the University of Oxford COVID Medical Sciences Division Rapid Response Fund.
Damage to the brainstem – the brain’s ‘control centre’ – is behind long-lasting physical and psychiatric effects of severe Covid-19 infection, a study suggests.
Harvard community members, some holding signs calling for the release of hostages held by Hamas, gathered on the steps of Widener Library Monday evening during a memorial vigil co-hosted by Harvard Chabad and Harvard Hillel. The event was held to “honor those lost and stand together in solidarity, unity, and hope” on the one-year anniversary of the terrorist attack by Hamas on Israel on Oct. 7, 2023. Speakers offered prayers and shared stories about friends who perished or were taken hostage. Some sang songs in memory of lost loved ones. The names of all the hostages were read.
Participants called for the release of hostages held by Hamas.
Harvard Professor Eric Nelson speaks to the crowd.
Some in the crowd shared personal stories of friends lost in the attack.
Harvard President Alan Garber was among those attending the vigil.
Kennedy School scholars examine spread of Gaza war to include Hezbollah, Iran
A year ago, a terrorist attack on Israel by Hamas sparked the war in Gaza, which has claimed tens of thousands of lives and recently begun to spread to Lebanon and Iran. What happens next?
Scholars at the Harvard Kennedy School came together Monday to discuss the risks of further escalation in the Middle East in a panel led by Meghan O’Sullivan, director of the Belfer Center for Science and International Affairs at HKS.
“It is a day of remembrance for many who lost loved ones a year ago today in the terrorist attacks by Hamas and those who lost loved ones in the many, many deaths that have occurred since that time,” said O’Sullivan, who is also Jeane Kirkpatrick Professor of the Practice of International Affairs. “It’s a day of mourning for many people who are still losing members of their families and loved ones.”
A major point of discussion revolved around the role of Iran, a longtime opponent of Israel, in the ongoing conflict. Last week, Iran launched a major ballistic missile attack on Israel — only the second time the country has been directly attacked by Iran. Iranian officials said the action was in retaliation for the killing of a Hamas leader by Israel in Tehran in July. Israel has not claimed responsibility for the death.
The missile attack was widely viewed as an escalation between Israel and Iran, which has long actively supported Hamas in Gaza, Hezbollah in Lebanon, and smaller militant groups in the West Bank in conflicts with Israel.
“We’re facing a Middle Eastern crisis,” said O’Sullivan, who served in the George W. Bush administration as deputy national security adviser on Iraq and Afghanistan. “We have moved away from a decade-long war between Israel and Iran by proxy to a place where now Israel and Iran are in conflict with each other directly.”
The panel featured Edward Djerejian, a former U.S. ambassador to Israel and currently a senior fellow at Harvard’s Middle East Institute; Gidi Grinstein, an Israeli entrepreneur and former peace negotiator under Prime Minister Ehud Barak; Karim Sadjadpour, an Iranian-American policy analyst at the Carnegie Endowment for International Peace; and Omar H. Rahman, senior fellow with the Middle East Council on Global Affairs.
For more than an hour, the panelists discussed Israel’s strategy as the dominant military power in the region, Iran’s involvement, and what America’s role is — and may become.
“What [Israel] has done in terms of degrading Hamas’ capabilities and decapitating Hezbollah’s leadership, these, in my eyes, are brilliant tactical victories,” Djerejian said. “But what about the day after?”
Israeli Prime Minister Benjamin Netanyahu has said his goals are to demilitarize and de-radicalize the opposition in Gaza, but his actions don’t align with those aims, Djerejian said.
“He’s defined the Israeli military movements in Lebanon to change the balance of power on the northern border of Israel. But these are not a strategy. These are mostly tactics without resolving the key issues.”
Rahman was tougher in his assessment of Israel’s actions.
“You’re creating a bottomless pit of despair, trauma, anguish, anger, all the things that will feed the resistance for generations,” he said of the situation in Gaza. “And so Hamas is not going anywhere as an organization. Hezbollah is not going anywhere as an organization.”
He argued that the U.S., which has provided almost $18 billion in aid to Israel in the last year, must reconsider its role in the conflict.
“Does America want to continue underwriting an indefinite Israeli war on the region? Is that something we want to do with our taxpayer money and our support at the cost of our national interest, our credibility, on the international stage?” he said.
Grinstein said that the conflict could end immediately if Hamas were to surrender.
“I do want to say and acknowledge that our tragedy creates a challenge of compassion,” he said. “To look at the other side and feel their tragedy as well, because there is an unimaginable tragedy happening in Gaza. Here could be an easy solution for this war, which is for Hamas leadership to leave Gaza and end the war.”
As for the likelihood of Iran relenting, Sadjadpour said that it’s unlikely.
“I don’t think we’re ever going to see meaningful peace and stability in the entire Middle East until you have a government in Iran, I won’t say democratic, but whose organizing principle is not the revolutionary ideology of 1979 but the national interest of Iran.”
For more information, including a transcript of the event, visit the Belfer Center website.
Penn’s Division of Public Safety held its PennReady: Protecting Communities Through Resilience and Relationships Health and Safety Fair on Sept. 27. The event featured a controlled burn of a mock residential room, showcasing the efficacy of sprinkler and alarm systems and the response of first responders and city firefighters.
Tens of thousands of items related to public markets acquired by Penn alum David K. O’Neil create a collection unique in size and scope. Spanning four centuries from locations near and far, his collection now has a home at the Penn Libraries.