UK Govt appoints 3 Cambridge academics to new net zero advisory council
Three Cambridge academics have been appointed to the UK Department for Energy Security and Net Zero’s new Science and Technology Advisory Council (STAC), which met for the first time on Wednesday 9 July 2025.

Engineering Professor Julian Allwood (St Catharine's), Cambridge Zero Director Professor Emily Shuckburgh (Darwin) and Cambridge Energy Policy Research Group Director Emeritus Professor David Newbery (Churchill) join a panel of 17 expert advisors on STAC, which has been created to provide robust, scientific, evidence-based information to support key decisions as the UK overhauls its energy system to reach clean power by 2030.
The Council is also expected to offer independent viewpoints and cutting-edge research on topics ranging from climate science, energy networks and engineering to the latest technologies and artificial intelligence.
“Evidence-based decision-making is fundamental to the drive for clean power and tackling the climate crisis, with informed policymaking the key to securing a better, fairer world for current and future generations,” UK Energy Secretary Ed Miliband said in the Government’s announcement.
Professor Allwood is Professor of Engineering and the Environment at the University of Cambridge and directs the Use Less Group. Uniquely, his research aims to articulate a pathway to zero emissions based on technologies that already exist at scale. His projects include ground-breaking innovations such as electric cement.
Professor Shuckburgh is Director of Cambridge Zero, the University’s major climate change initiative. A mathematician and data scientist, Emily Shuckburgh is also Professor of Environmental Data Science at the Department of Computer Science and Technology, Academic Director of the Institute of Computing for Climate Science, and co-Director of the Centre for Landscape Regeneration and the UKRI Centre for Doctoral Training on the Application of AI to the study of Environmental Risks (AI4ER).
As a climate scientist, Professor Shuckburgh worked for more than a decade at the British Antarctic Survey where her work included leading a UK national research programme on the Southern Ocean and its role in climate.
Professor Newbery is the Director of the Cambridge Energy Policy Research Group, an Emeritus Professor of Economics at the Faculty of Economics and a Professorial Research Associate in the UCL Bartlett School of Environment, Energy and Resources, University College London.
STAC’s expert advice is expected to allow ministers to access the most up-to-date and well-informed scientific evidence, improving decision-making and effectiveness of policy implementation.
STAC is led by Professor Paul Monks, STAC Co-Chair and Chief Scientific Adviser & Director General, Department for Energy Security and Net Zero (DESNZ); and Professor David Greenwood FREng, STAC Co-Chair and CEO of Warwick Manufacturing Group (WMG) High Value Manufacturing Catapult Centre.
Celebrating a legacy of growth and global impact
NUS marked its 120th anniversary with a grand gala dinner on 3 July 2025 at Marina Bay Sands, gathering close to 2,000 alumni, donors, partners and friends, including Guest-of-Honour Minister for Foreign Affairs Dr Vivian Balakrishnan (Medicine ’85, MMed ’91), Emeritus Senior Minister Goh Chok Tong (Arts '64, HonLLD '15), and Senior Minister of State for Health and Manpower Dr Koh Poh Koon (Medicine ’96, MMed ’03). The evening paid tribute to NUS’ astounding journey of service, innovation and impact since its founding in 1905.
From its roots as the Straits Settlements and Federated Malay States Government Medical School with only 23 students, NUS has grown into a globally renowned institution spanning 15 colleges, faculties and schools across three campuses.
In his opening remarks, NUS President Professor Tan Eng Chye (Science ’85) reflected on its founding purpose: "The Straits Settlements and Federated Malay States Government Medical School was the forerunner of the Yong Loo Lin School of Medicine, honouring a legacy of service from tending to the wounded in World War II to serving on the frontlines during the COVID-19 pandemic,” he shared. “Today, the School of Medicine is pushing the frontiers of groundbreaking research discoveries such as developing life-saving CAR-T cell immunotherapy to treat leukaemia, and the world’s first blood-based diagnostic test for early gastric cancer detection.”
Prof Tan noted that this same spirit of service still drives NUS today, which now educates over 7,000 undergraduates each year, producing graduates who contribute across society — including more than half of today’s Cabinet ministers.
Relevance in a changing world
Yet Prof Tan was clear-eyed about the challenges ahead. Rapid technological disruption, shifting student expectations and geopolitical uncertainties will test the University’s resilience and relevance. "To remain relevant, we must continuously adapt — renewing our value in each new generation, not only in how and what we teach, but how we lead and inspire," he emphasised.
Against this backdrop, Prof Tan reaffirmed the role of education as an intrinsic part of the Singapore social compact. “A significant number of our students and alumni are the first in their families to go to university. In providing opportunities to study at NUS, we are nurturing the best and brightest talents, uplifting families, and inspiring the next generation to realise their aspirations.”
In the last financial year, NUS received S$233 million in philanthropic gifts, including S$26 million earmarked to support students from low-income families through the Enhanced Financial Aid Scheme, benefitting around 3,000 undergraduates annually.
Addressing global health challenges
Next to address the guests was the Dean of NUS Medicine, Professor Chong Yap Seng (Medicine ’88, MD ’07), who highlighted how NUS’ founding mission — to meet public health needs — is just as critical today. He described an increasingly complex health landscape marked by geopolitical instability, climate change, the promises and threats of artificial intelligence, and misinformation spread on social media.
"A whole-of-society, whole-of-planet approach is required more urgently than ever," he said, adding that "the combined efforts of people with diverse skills, expertise and perspectives will be vital to creating a healthier and more sustainable future."
A citadel with open gates
Guest-of-Honour Dr Vivian Balakrishnan, a former president and chairman of the NUS Students’ Union, spoke about how global volatility will inevitably affect academia and science, potentially leading to higher inflation, greater risks for smaller nations and a slowdown in innovation. He described NUS through three vivid metaphors: first, as a cradle nurturing Singapore’s national identity and unity since 1905; second, as a citadel with open gates — strong yet welcoming to talent and ideas.
"NUS needs to have open gates, and we need to have our fair share of access to talent and ideas, while still remembering that this is the citadel based in Singapore, to protect Singapore," he explained.
Lastly, he urged NUS to be a launch pad for new technological breakthroughs, ensuring Singapore and Asia do not get left behind in a fast-changing world. His words echoed NUS’ international outlook, which is supported by a global alumni network of nearly 390,000 across more than 100 countries.
A night of joy, gratitude and pride
The gala dinner was a lively celebration of NUS spirit and camaraderie. Guests were welcomed by a candle-lined walkway and a playful photo wall with handheld props featuring messages like “I love NUS” and “Where I Found My Tribe.” Student and alumni performances from NUS Dance Blast! and The Jazzlings provided entertainment during the dinner, while video segments highlighted NUS’ incredible growth and brought celebratory greetings from alumni across the world. The occasion was also truly global, with alumni travelling to Singapore from cities such as Tokyo, Jakarta, Yangon, Vancouver, London, and Melbourne.
Interactive exhibits filled the foyer, from the AiSee assistive technology demonstration by NUS Computing to a showcase of Duke-NUS young alumni leading in innovation. NUS Libraries also delighted guests with a fun campus landmark quiz. In a meaningful gesture, Dr Balakrishnan, Prof Tan, Prof Chong and Chief Alumni Officer Ms Ovidia Lim-Rajaram (Arts & Social Sciences ’89) unveiled and watered a Tembusu tree — a living symbol of resilience and growth.
"Tonight, we celebrate 120 years of NUS — but this evening is about so much more than a number. It’s about the remarkable journey of a humble medical school that has grown into one of the world’s leading universities," Ms Lim-Rajaram told the guests.
Celebrating milestones and a vision for tomorrow
The celebration also honoured key milestones across NUS’ schools and faculties, including the 120th anniversary of the Yong Loo Lin School of Medicine, 70 years of NUS Engineering, 60 years of the NUS Business School, 50 years of NUS Computing, 45 years of Kent Ridge Hall, 20 years of Duke-NUS Medical School and NUS High School, and 15 years of Tembusu College.
As the evening concluded with a cake-cutting ceremony, guests looked forward to a year of commemorative events — from the NUS120 Homecoming at Bukit Timah Campus, the Distinguished Speaker Series, the #NUSLife Photo Exhibition, NUS120 SuperNova, to Rag and Flag — that will continue to connect past, present and future generations. As NUS looks ahead to its next 120 years, it stands ready to nurture bold thinkers, responsible leaders and a community grounded in service and shared purpose.
By NUS Office of Alumni Relations
Forging bonds, changing lives: The human touch and the NUS Medicine journey
Commencement celebrations for more than 17,000 graduates from the NUS Class of 2025 have begun, with 35 ceremonies taking place at the University Cultural Centre from 10 to 21 July 2025.
In tribute to the University’s beginnings as a small medical school founded in 1905 to serve the community, the season’s first ceremony honours graduates of the Yong Loo Lin School of Medicine (NUS Medicine).
As the University and NUS Medicine celebrate their 120th anniversaries, these founding ideals and dedication to service have continued through the decades. Addressing graduates at the inaugural ceremony, NUS President Professor Tan Eng Chye underscored this enduring ethos of scholarship and service. “While we have grown and evolved, we remain steadfast in our mission to serve the needs of our country and society. As you move forward with purpose, passion, and confidence, you join a legacy of dreamers, changemakers, and trailblazers to shape a better future for all.”
Now embarking on their next chapter, three NUS Medicine graduates share the challenges and milestones that marked their path through university.
Sophie Xie: When presence is a present
For Sophie Xie Jia Lin, medicine is as much about human connection as it is about clinical knowledge.
Back in junior college, she was drawn to pursuing medicine as a career for its blend of academic rigour and deep human connection. Now undergoing supervised clinical practice in Internal Medicine at the National University Hospital, the Bachelor of Medicine and Bachelor of Surgery (MBBS) graduate recalls a recent encounter that affirmed her career choice.
“One of the patients whom I discharged came to give me a hug. She told me, ‘You are a good doctor and I love you because you made me feel cared for.’ I hadn’t thought much about what I’d done for her, but that was when I realised that being present can be as powerful as any procedure,” she said. “Moments like these really make my job a lot more meaningful.”
Her journey at NUS Medicine also took her beyond the wards and lecture halls. She participated in projects like NUS Medicine’s Receiving and Giving (RAG) and Flag performance in her first year, even when the COVID-19 pandemic meant recording choreography over Zoom.
In her second year, she co-led the production as part of the organising committee, drawing on her years of training in Chinese dance and her interest in choreography and film to direct the project. “These creative outlets have been essential in helping me stay grounded and reconnect with myself outside of medicine. I see a parallel between my journey in medicine and in the arts – just as I continue to grow in clinical knowledge, I also seek new ways to grow in my pursuits. I think the beauty of NUS Medicine lies in the fact that I had the freedom and opportunities to continue exploring and growing even outside of class,” said the former resident of King Edward VII Hall (KEVII), home to many students from NUS Medicine.
Sophie also teamed up with fellow medical student Joseph Lim, from the same graduating class of 2025, to produce two short films. “Filmmaking was a really interesting opportunity for me to see how different art mediums can convey a message and give the audience this very visceral feeling,” she said.
When she felt isolated during the COVID-19 pandemic, she leaned on her KEVII family. “There was a strong sense of camaraderie all across NUS Medicine. From clinical rotations to preparing for exams, we had each other’s backs,” she said.
Looking ahead, Sophie aims to pursue a career in Internal Medicine, where she hopes to provide continuity of care and spend meaningful time with patients, in line with her belief that medicine is, at its core, about human connection. “Medicine stimulated me intellectually. But more than that, it has given me the space to form deep human bonds,” she noted. “[Medicine] is a very humbling journey, with many things to learn. There’s an analogy that I try to practise: life is like hiking a mountain. When we look forward, it feels endless and tough, but when we take a pause and look back, we realise we have actually come such a long way.”
Ashlee Tan: Diving deep into medicine and sport
When her best friend said she wanted to be a paediatrician when she grew up, a young Ashlee Tan Yi Xuan adopted the same dream.
This fun childhood wish turned into a firm ambition as she grew older. One striking moment was a project in primary school, where she learnt about less fortunate children in impoverished parts of the world with no access to proper healthcare.
Seeing medicine as a way to contribute to society, Ashlee, who is also a national diver, applied to study MBBS at NUS Medicine. However, at her admission interview in 2019, someone advised her to pick between her first love — diving, which she had picked up at the age of 11 — and her medical studies.
But she took the plunge anyway. “I didn’t think it was possible to thrive at NUS Medicine while continuing to pursue competitive diving,” she said. “Everyone said medicine was too intense. As it turns out, if you really want it, you’ll find a way.”
And so she did, even taking a gap year in 2023 to train full-time. The decision came after the 2021 SEA Games in Hanoi, held right after her final exams in Year 3.
“It was brutal — studying and training simultaneously. I could not even fly to Hanoi together with the rest of my team because I still had practical exams,” she recalled. “We missed out on silver, but that reignited the fire in me and I realised that I was not ready to hang up my swimming suit.”
The gap year culminated in the 2024 World Aquatics Championships in Doha, where she narrowly missed out on qualifying for the Olympics. “It wasn’t the result I wanted, but it reminded me I could compete at that level. It gave me more confidence to tackle not just diving but medical school too,” she recounted.
Training 30 hours a week during her gap year and maintaining a lighter but consistent schedule after returning to school honed her time management skills. Her strategies included drawing up colour-coded physical timetables and task lists. Even while focused on full-time training, she still spent at least two hours studying every day, which helped make her final year more manageable.
“I’m very visual. Planning helped me stop panicking. And nothing beats the satisfaction of striking a task off my to-do list,” she explained.
Ashlee is still putting in the hours at the pool as she eyes the 2028 Olympics. Now undergoing clinical training at the internal medicine division of Ng Teng Fong General Hospital, she shared, “I’ve learned to make quality count more than quantity. It’s about pacing myself and training smart.”
One piece of advice she would give her younger self? “Go all in, even from the first year. Don’t assume anything is impossible,” she laughed.
Ang Jing Xuan: Seeing the person behind the patient
Caring is at the heart of nursing. For Ang Jing Xuan, that meant understanding the person and not just the condition.
This inspired him to take on a second major in Sociology while pursuing a Bachelor of Science in Nursing at the Yong Loo Lin School of Medicine’s Alice Lee Centre for Nursing Studies. One early clinical placement confirmed that he was on the right path.
“I was caring for an elderly patient who kept coming back for the same condition,” he recalled. “Medically, everything was right. But she told me, ‘It’s hard when you’re doing this alone.’ That hit me. It wasn’t just a physical problem, it was a social one too.”
Sociology gave him the vocabulary to articulate what he was observing — and the tools to respond. “Nursing taught me to notice the details of a patient’s condition, but Sociology taught me to notice the bigger picture,” he explained.
He applied this approach beyond the hospital wards, including during a 2024 summer internship at DBS, where he helped gamify a customer service initiative. “After watching how the bank managed change with empathy, patiently teaching their staff how to use new technology, I thought I could bring this human-centred perspective back to nursing,” he said.
Juggling a second major was challenging. “I’d overload my semesters just to feel productive,” he confessed. “But during exam season, panic would hit. My friends and I turned it into a late-night mission: staying up till 3am, quizzing each other and trying every trick to cram those slides into our heads. We were all over-caffeinated, half-delirious, but we got through it together.”
Now on the cusp of his nursing career, Jing Xuan is drawn to the adrenaline rush of the emergency wards and intensive care units. But he also hopes to use his training in Sociology to improve systems and support patients more holistically.
“I don’t just see a diagnosis anymore,” he said. “I see the person behind it — and all the structures that they’re navigating. That perspective has made me more thoughtful, more curious, and more committed to finding ways we can support people in their everyday lives. I also want to make a small but real change in nursing, perhaps by finding ways to streamline patient care or support staff during their most hectic shifts.”
This story is part of NUS News’ coverage of Commencement 2025, which celebrates the achievements of our graduates from the Class of 2025. For more on Commencement, read our stories and graduate profiles, check out the official Commencement website, or look up and tag #NUS2025 on our social media channels!
Banking on AI risks derailing net zero goals: report on energy costs of Big Tech
With countries such as the UK declaring ambitious goals for both AI leadership and decarbonisation, a new report suggests that AI could drive a 25-fold increase in the global tech sector’s energy use.

By 2040, the energy demands of the tech industry could be up to 25 times higher than today, with unchecked growth of data centres driven by AI expected to create surges in electricity consumption that will strain power grids and accelerate carbon emissions.
This is according to a new report from the University of Cambridge’s Minderoo Centre for Technology and Democracy, which suggests that even the most conservative estimate for big tech’s energy needs will see a five-fold increase over the next 15 years.
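For a sense of scale, those multiples can be translated into implied compound annual growth rates. The arithmetic sketch below is our own illustration, not a calculation from the report:

```python
# Back-of-envelope: the constant yearly growth rate implied by a total
# multiple over a fixed horizon. Figures come from the claims quoted above.
def implied_annual_growth(multiple: float, years: int) -> float:
    """Return the compound annual growth rate that yields `multiple` over `years`."""
    return multiple ** (1 / years) - 1

print(f"{implied_annual_growth(25, 15):.1%}")  # ~23.9% per year (25x by 2040)
print(f"{implied_annual_growth(5, 15):.1%}")   # ~11.3% per year (conservative 5x)
```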
The idea that governments such as the UK can become leaders in AI while simultaneously meeting their net zero targets amounts to “magical thinking at the highest levels,” according to the report’s authors. The UK is committed to net zero greenhouse gas emissions by 2050.
Researchers call for global standards in reporting AI’s environmental cost through forums such as COP, the UN climate summit, and argue that the UK should advocate for this on the international stage while ensuring democratic oversight at home.
The report, published today, synthesises projections from leading consultancies to forecast the energy demands of the global tech industry. The researchers note that these projections are based on claims by tech firms themselves.
At the moment, data centres – the facilities that house servers for processing and storing data, along with cooling systems preventing this hardware from overheating – account for nearly 1.5% of global emissions.
This figure is expected to grow by 15-30% each year to reach 8% of total global greenhouse gas emissions by 2040, write the report’s authors. They point out that this would far exceed current emissions from air travel.
The report highlights that in the US, China, and Europe, data centres already consume around 2-4% of national electricity, with regional concentrations becoming extreme. For example, up to 20% of all power in Ireland now goes to data centres in Dublin’s cluster.
“We know the environmental impact of AI will be formidable, but tech giants are deliberately vague about the energy requirements implicit in their aims,” said Bhargav Srinivasa Desikan, the report’s lead author from Cambridge’s Minderoo Centre.
“The lack of hard data on electricity and water consumption as well as associated carbon emissions of digital technology leaves policymakers and researchers in the dark about the climate harms AI might cause.”
“We need to see urgent action from governments to prevent AI from derailing climate goals, not just deferring to tech companies on the promise of economic growth,” said Desikan.
The researchers also use data from corporate press releases and ESG reports of some of the world’s tech giants to show the alarming trajectory of energy use before the AI race had fully kicked into gear.
Google’s reported greenhouse gas emissions rose by 48% between 2019 and 2023, while Microsoft’s reported emissions increased by nearly 30% from 2020 to 2023. Amazon’s carbon footprint grew around 40% between 2019 and 2021, and – while it has begun to fall – remains well above 2019 levels.
This self-reported data is contested, note the researchers, and some independent reporting suggests that actual emissions from tech companies are much higher.
Several tech giants are looking to nuclear power to defuse the energy timebomb at the heart of their ambitions. Sam Altman, CEO of OpenAI, has argued that fusion is needed to meet AI’s potential, while Meta have said that nuclear energy can ‘provide firm, baseload power’ to supply their data centres.
Microsoft have even signed a 20-year agreement to reactivate the Three Mile Island plant – site of the worst nuclear accident in US history.
Some tech leaders, such as former Google CEO Eric Schmidt, argue that environmental costs of AI will be offset by its benefits for the climate crisis – from contributing to scientific breakthroughs in green energy to enhanced climate change modelling.
“Despite the rapacious energy demands of AI, tech companies encourage governments to see these technologies as accelerators for the green transition,” said Professor Gina Neff, Executive Director of the Minderoo Centre for Technology and Democracy.
“These claims appeal to governments banking on AI to grow the economy, but they may compromise society's climate commitments.”
“Big Tech is blowing past their own climate goals, while they rely heavily on renewable energy certificates and carbon offsets rather than reducing their emissions,” said Prof Neff.
“Generative AI may be helpful for designing climate solutions, but there is a real risk that emissions from the AI build-out will outstrip any climate gains as tech companies abandon net zero goals and pursue huge AI-driven profits.”
The report calls for the UK’s environmental policies to be updated for the ‘AI era’. Recommendations include adding AI’s energy footprint into national decarbonisation plans, with specific carbon reduction targets for data centres and AI services, and requirements for detailed reporting of energy and water consumption.
Ofgem should set strict energy efficiency targets for data centres, write the report’s authors, while the Department for Energy Security and Net Zero and the Department for Science, Innovation and Technology should tie AI research funding and data centre operations to clean power adoption.
The report’s authors note that the UK’s new AI Energy Council currently consists entirely of energy bodies and tech companies – with no representation for communities, climate groups or civil society.
“Energy grids are already stretched,” said Professor John Naughton, Chair of the Advisory Board at the Minderoo Centre for Technology and Democracy.
“Every megawatt allocated to AI data centres will be a megawatt unavailable for housing or manufacturing. Governments need to be straight with the public about the inevitable energy trade-offs that will come with doubling down on AI as an engine of economic growth.”
Large-scale DNA study maps 37,000 years of human disease history
Researchers have mapped the spread of infectious diseases in humans across millennia to reveal how human-animal interactions permanently transformed our health today.

A new study suggests that our ancestors’ close cohabitation with domesticated animals and large-scale migrations played a key role in the spread of infectious diseases.
The team, led by Professor Eske Willerslev at the Universities of Cambridge and Copenhagen, recovered ancient DNA from 214 known human pathogens in prehistoric humans from Eurasia.
They found that the earliest evidence of zoonotic diseases – illnesses transmitted from animals to humans, such as COVID-19 in recent times – dates back to around 6,500 years ago, with these diseases becoming more widespread approximately 5,000 years ago.
The study detected the world’s oldest genetic trace of the plague bacterium, Yersinia pestis, in a 5,500-year-old sample. The plague is estimated to have killed between one-quarter and one-half of Europe’s population during the Middle Ages.
In addition, the researchers found traces of many other diseases including:
Malaria (Plasmodium vivax) – 4,200 years ago
Leprosy (Mycobacterium leprae) – 1,400 years ago
Hepatitis B virus – 9,800 years ago
Diphtheria (Corynebacterium diphtheriae) – 11,100 years ago
This is the largest study to date on the history of infectious diseases and is published today in the journal Nature.
The researchers analysed DNA from over 1,300 prehistoric humans, some up to 37,000 years old. The ancient bones and teeth have provided a unique insight into the development of diseases caused by bacteria, viruses, and parasites.
“We’ve long suspected that the transition to farming and animal husbandry opened the door to a new era of disease – now DNA shows us that it happened at least 6,500 years ago,” said Willerslev.
He added: “These infections didn’t just cause illness – they may have contributed to population collapse, migration, and genetic adaptation.”
The significant increase in the incidence of zoonoses around 5,000 years ago coincides with a migration to north-western Europe from the Pontic Steppe – that is, from parts of present-day Ukraine, south-western Russia and western Kazakhstan. The people embarking on this migration – who to a large extent passed on the genetic profile found among people in north-western Europe today – belonged to the Yamnaya herders.
The findings could be significant for the development of vaccines and for understanding how diseases arise and mutate over time.
“If we understand what happened in the past, it can help us prepare for the future. Many of the newly emerging infectious diseases are predicted to originate from animals,” said Associate Professor Martin Sikora at the University of Copenhagen, and first author of the report.
Willerslev added: “Mutations that were successful in the past are likely to reappear. This knowledge is important for future vaccines, as it allows us to test whether current vaccines provide sufficient coverage or whether new ones need to be developed due to mutations.”
The sample material was primarily provided by museums in Europe and Asia. The samples were partly extracted from teeth, where the enamel acts as a lid that protects DNA from degradation over time. The rest of the DNA was primarily extracted from petrous bones – the hardest bones in the human body – located on the inside of the skull.
The research was funded by the Lundbeck Foundation.
Reference
Sikora, M et al: ‘The spatiotemporal distribution of human pathogens in ancient Eurasia.’ Nature, July 2025. DOI: 10.1038/s41586-025-09192-8
Adapted from a press release by the University of Copenhagen.
Empowering tomorrow’s economic leaders in a divided world
In a time marked by rising global tensions, fractured trade relations, and deepening social and economic divides, the National Economics and Financial Management Challenge (NEFMC) 2025, organised by the NUS Economics Society (ECS), brought together nearly 1,000 pre-tertiary students in June to grapple with a question that could define their generation: What does economic resilience look like in a world that’s increasingly divided?
This year’s theme, “Economic Resilience in a Divided World”, reflects a reality that young people are already living through — a world where geopolitical instability, climate shocks, and market fragmentation are no longer abstract headlines, but defining features of the systems they will one day lead.
Hridayansh Khera, the incoming 63rd ECS President and a first-year Faculty of Arts and Social Sciences Economics major, remarked at the opening ceremony, “The theme couldn’t be more timely. Our participants aren’t just solving economic problems — they’re building the mindset needed to lead through disruption.”
A landmark year of record participation, esteemed collaborations and thought leadership
Organised annually by ECS, NEFMC 2025 saw its biggest edition yet, with participation expanding nearly threefold compared to previous iterations. This year also marked a major milestone, with the Monetary Authority of Singapore (MAS) joining as the event’s key sponsor for the first time, underscoring the challenge’s growing significance in the national pre-tertiary academic landscape.
The competition scored another first when it hosted Professor Lars Peter Hansen, the 2013 Nobel Laureate in Economic Sciences from the University of Chicago, for a virtual keynote dialogue on 10 June 2025 titled “Scientific Uncertainty in Climate Policy: Conceal or Confront?”.
At the session, Prof Hansen emphasised that uncertainty in climate models should not be a justification for inaction but should instead be integrated directly into policymaking. He cautioned against the risks of overconfidence in the modelling process and explained why effective policies must strike a balance between preparing for worst-case scenarios and waiting for better data.
During the competition, students across three regional rounds tackled real-world economic challenges through case analyses, policy strategy and live presentations. The first two rounds saw teams of four or five competing in an online assessment that tested their understanding of economics and finance fundamentals, followed by a research analysis of the effects of geopolitical tensions on supply chain dynamics. The top six teams entered the final round and presented their cases for how low- and middle-income countries can navigate geopolitical shocks in areas such as trade finance, energy transition, and wealth and income inequality.
The esteemed panel of judges included Mr Joe Hooper, Director of the United Nations Development Programme (UNDP) Global Centre for Technology, Innovation and Sustainable Development; Mr Eduardo Pedrosa, Executive Director of the Asia-Pacific Economic Cooperation Secretariat; Mr Richard Stein, Managing Director at Goldman Sachs; and Ms Benish Aslam, Regional Lead of Government Affairs and Policy at The Asia Pacific Medical Technology Association (APACMed).
After intense deliberation, the judging panel awarded the top prize to Team Financial Decimators from NUS High School of Mathematics and Science. Their presentation stood out for its detailed analysis and strong, evidence-based policy recommendations tailored to the needs of developing countries. The winning team represented Singapore at the Australasian Economics Olympiad last week and emerged in second place in the team category!
Building a green Singapore
The finals also saw Mr Stanley Loh, Permanent Secretary at the Ministry of Sustainability and the Environment, deliver his keynote address titled “Sustainable Singapore: Building a Green Economy in an Ever-Evolving World”. Mr Loh, who is also an alumnus of NUS Economics and former ECS President, then participated in a fireside chat with Mr Khera, discussing the impact of geopolitical tensions on the development of a green economy and some of his personal experiences from when he was a student.
Mr Loh laid out the stark implications of climate inaction, warning that sea levels could rise by up to 1.15m by 2100 – a scenario that would place large parts of Singapore at serious risk. He noted that temperatures could rise by up to 5°C, with today’s coolest month (January) being warmer than the hottest month (May) in the 1960s. Beyond these extremes, he emphasised the severe externalities climate change could bring, including detrimental impacts on public health, infrastructure, and food systems.
Global coordination, he reminded participants, is essential: by leveraging the diverse relative strengths of nations, the global community can mount a more efficient and coordinated response to the climate crisis. “Each country brings something different — some have advantages in alternative energy, others in R&D or systemic implementation. We need to tap on comparative advantage in climate action too.”
Inspiring young economists
Reflecting on the experience, Zhu Yancun from Team Financial Decimators noted how the NEFMC was an incredible opportunity for the team to explore the exciting intersection of economics, geopolitics, and finance. “We worked across multiple time zones because we were in different countries during the competition period. The challenges of collaborating online only strengthened our bond as a team and made our success more rewarding. We now walk away inspired by the role economists can play in building a more sustainable and equitable world.”
At the closing ceremony, outgoing and 62nd ECS President Colin Chow shared, “At its core, NEFMC has always been more than just observing policy from the sidelines. It’s about sparking curiosity, challenging assumptions, and empowering students to reshape the way we think about policy. I believe that the best ideas come when students are given the space to experiment and reimagine—and it’s been incredibly rewarding to watch this cohort do exactly that.”
By the NUS Economics Society at NUS Faculty of Arts and Social Sciences
The historical literature behind Shaharom Husain's work aims to educate
By Dr Azhar Ibrahim Alwee, Senior Lecturer from the Department of Malay Studies, Faculty of Arts and Social Sciences at NUS
A walking elegy, tiny gallery, and gentle Brutalism

Photography professor recommends 3 local spots to find beauty, solace
First installment of “Favorite Things,” a new series in which Harvard faculty share a few of theirs. Robin Kelsey is the Shirley Carter Burden Professor of Photography, History of Photography and American Art.
Favorite place to walk
Mount Auburn Cemetery
You can commune with the dead, with migrating birds, and with ancient trees. Each time I visit, I treasure the chance to say hello to departed friends, from vital contemporaries who left us too soon (e.g., our faculty colleague Svetlana Boym) to those long-gone but thrilling us still (e.g., Winslow Homer). When I need to restore myself, there is no better place in Cambridge than this path-winding refuge with its massive, straight-boled oaks.
Favorite art gallery
Anthony Greaney
What better place to find contemporary art than up creaking stairs in a dilapidated warehouse by the Market Basket in Somerville? The space is tiny, the light soft and exquisite, and the curation distinguished by its perspicacity and care.
Favorite building on campus
The Carpenter Center
To call this building “Brutalist” may abide by textbook definitions but feels utterly inapt to me. I find the Carpenter Center inviting (who can resist that ramp?) and aspirational. The sight lines, the terrace, the cool concrete shadows on a hot summer day. Beautiful!
— As told to Sy Boles/Harvard Staff Writer
Changing the conversation in health care
Generative artificial intelligence is transforming the ways humans write, read, speak, think, empathize, and act within and across languages and cultures. In health care, gaps in communication between patients and practitioners can worsen patient outcomes and prevent improvements in practice and care. The Language/AI Incubator, made possible through funding from the MIT Human Insight Collaborative (MITHIC), offers a potential response to these challenges.
The project envisions a research community rooted in the humanities that will foster interdisciplinary collaboration across MIT to deepen understanding of generative AI’s impact on cross-linguistic and cross-cultural communication. The project’s focus on health care and communication seeks to build bridges across socioeconomic, cultural, and linguistic strata.
The incubator is co-led by Leo Celi, a physician and the research director and senior research scientist with the Institute for Medical Engineering and Science (IMES), and Per Urlaub, professor of the practice in German and second language studies and director of MIT’s Global Languages program.
“The basis of health care delivery is the knowledge of health and disease,” Celi says. “We’re seeing poor outcomes despite massive investments because our knowledge system is broken.”
A chance collaboration
Urlaub and Celi met during a MITHIC launch event. Conversations during the event reception revealed a shared interest in exploring improvements in medical communication and practice with AI.
“We’re trying to incorporate data science into health-care delivery,” Celi says. “We’ve been recruiting social scientists [at IMES] to help advance our work, because the science we create isn’t neutral.”
Language is a non-neutral mediator in health care delivery, the team believes, and can be a boon or barrier to effective treatment. “Later, after we met, I joined one of his working groups whose focus was metaphors for pain: the language we use to describe it and its measurement,” Urlaub continues. “One of the questions we considered was how effective communication can occur between doctors and patients.”
Technology, they argue, impacts casual communication, and its impact depends on both users and creators. As AI and large language models (LLMs) gain power and prominence, their use is broadening to include fields like health care and wellness.
Rodrigo Gameiro, a physician and researcher with MIT’s Laboratory for Computational Physiology, is another program participant. He notes that work at the laboratory centers responsible AI development and implementation. Designing systems that leverage AI effectively, particularly when considering challenges related to communicating across linguistic and cultural divides that can occur in health care, demands a nuanced approach.
“When we build AI systems that interact with human language, we’re not just teaching machines how to process words; we’re teaching them to navigate the complex web of meaning embedded in language,” Gameiro says.
Language’s complexities can impact treatment and patient care. “Pain can only be communicated through metaphor,” Urlaub continues, “but metaphors don’t always match, linguistically and culturally.” Smiley faces and one-to-10 scales — pain measurement tools English-speaking medical professionals may use to assess their patients — may not travel well across racial, ethnic, cultural, and language boundaries.
“Science has to have a heart”
LLMs can potentially help scientists improve health care, although there are some systemic and pedagogical challenges to consider. Science can focus on outcomes to the exclusion of the people it’s meant to help, Celi argues. “Science has to have a heart,” he says. “Measuring students’ effectiveness by counting the number of papers they publish or patents they produce misses the point.”
The point, Urlaub says, is to investigate carefully while simultaneously acknowledging what we don’t know, citing what philosophers call epistemic humility. Knowledge, the investigators argue, is provisional and always incomplete. Deeply held beliefs may require revision in light of new evidence.
“No one’s mental view of the world is complete,” Celi says. “You need to create an environment in which people are comfortable acknowledging their biases.”
“How do we share concerns between language educators and others interested in AI?” Urlaub asks. “How do we identify and investigate the relationship between medical professionals and language educators interested in AI’s potential to aid in the elimination of gaps in communication between doctors and patients?”
Language, in Gameiro’s estimation, is more than just a tool for communication. “It reflects culture, identity, and power dynamics,” he says. In situations where a patient might not be comfortable describing pain or discomfort because of the physician’s position as an authority, or because their culture demands yielding to those perceived as authority figures, misunderstandings can be dangerous.
Changing the conversation
AI’s facility with language can help medical professionals navigate these areas more carefully, providing digital frameworks offering valuable cultural and linguistic contexts in which patient and practitioner can rely on data-driven, research-supported tools to improve dialogue. Institutions need to reconsider how they educate medical professionals and invite the communities they serve into the conversation, the team says.
“We need to ask ourselves what we truly want,” Celi says. “Why are we measuring what we’re measuring?” The biases we bring with us to these interactions — doctors, patients, their families, and their communities — remain barriers to improved care, Urlaub and Gameiro say.
“We want to connect people who think differently, and make AI work for everyone,” Gameiro continues. “Technology without purpose is just exclusion at scale.”
“Collaborations like these can allow for deep processing and better ideas,” Urlaub says.
Creating spaces where ideas about AI and health care can potentially become actions is a key element of the project. The Language/AI Incubator hosted its first colloquium at MIT in May, which was led by Mena Ramos, a physician and the co-founder and CEO of the Global Ultrasound Institute.
The colloquium also featured presentations from Celi, as well as Alfred Spector, a visiting scholar in MIT’s Department of Electrical Engineering and Computer Science, and Douglas Jones, a senior staff member in the MIT Lincoln Laboratory’s Human Language Technology Group. A second Language/AI Incubator colloquium is planned for August.
Greater integration between the social and hard sciences can potentially increase the likelihood of developing viable solutions and reducing biases. Allowing for shifts in the ways patients and doctors view the relationship, while offering each shared ownership of the interaction, can help improve outcomes. Facilitating these conversations with AI may speed the integration of these perspectives.
“Community advocates have a voice and should be included in these conversations,” Celi says. “AI and statistical modeling can’t collect all the data needed to treat all the people who need it.”
Community needs and improved educational opportunities and practices should be coupled with cross-disciplinary approaches to knowledge acquisition and transfer. The ways people see things are limited by their perceptions and other factors. “Whose language are we modeling?” Gameiro asks about building LLMs. “Which varieties of speech are being included or excluded?” Since meaning and intent can shift across those contexts, it’s important to remember these when designing AI tools.
“AI is our chance to rewrite the rules”
While there’s lots of potential in the collaboration, there are serious challenges to overcome, including establishing and scaling the technological means to improve patient-provider communication with AI, extending opportunities for collaboration to marginalized and underserved communities, and reconsidering and revamping patient care.
But the team isn’t daunted.
Celi believes there are opportunities to address the widening gap between people and practitioners while addressing gaps in health care. “Our intent is to reattach the string that’s been cut between society and science,” he says. “We can empower scientists and the public to investigate the world together while also acknowledging the limitations engendered in overcoming their biases.”
Gameiro is a passionate advocate for AI’s ability to change everything we know about medicine. “I’m a medical doctor, and I don’t think I’m being hyperbolic when I say I believe AI is our chance to rewrite the rules of what medicine can do and who we can reach,” he says.
“Education changes humans from objects to subjects,” Urlaub argues, describing the difference between disinterested observers and active and engaged participants in the new care model he hopes to build. “We need to better understand technology’s impact on the lines between these states of being.”
Celi, Gameiro, and Urlaub each advocate for MITHIC-like spaces across health care, places where innovation and collaboration are allowed to occur without the kinds of arbitrary benchmarks institutions have previously used to mark success.
“AI will transform all these sectors,” Urlaub believes. “MITHIC is a generous framework that allows us to embrace uncertainty with flexibility.”
“We want to employ our power to build community among disparate audiences while admitting we don’t have all the answers,” Celi says. “If we fail, it’s because we failed to dream big enough about how a reimagined world could look.”
AI shapes autonomous underwater “gliders”
Marine scientists have long marveled at how animals like fish and seals swim so efficiently despite having different shapes. Their bodies are optimized for efficient, hydrodynamic aquatic navigation so they can exert minimal energy when traveling long distances.
Autonomous vehicles can drift through the ocean in a similar way, collecting data about vast underwater environments. However, the shapes of these gliding machines are less diverse than what we find in marine life — go-to designs often resemble tubes or torpedoes, since they’re fairly hydrodynamic as well. Plus, testing new builds requires lots of real-world trial-and-error.
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the University of Wisconsin at Madison propose that AI could help us explore uncharted glider designs more conveniently. Their method uses machine learning to test different 3D designs in a physics simulator, then molds them into more hydrodynamic shapes. The resulting model can be fabricated via a 3D printer using significantly less energy than hand-made ones.
The MIT scientists say that this design pipeline could create new, more efficient machines that help oceanographers measure water temperature and salt levels, gather more detailed insights about currents, and monitor the impacts of climate change. The team demonstrated this potential by producing two gliders roughly the size of a boogie board: a two-winged machine resembling an airplane, and a unique, four-winged object resembling a flat fish with four fins.
Peter Yichen Chen, MIT CSAIL postdoc and co-lead researcher on the project, notes that these designs are just a few of the novel shapes his team’s approach can generate. “We’ve developed a semi-automated process that can help us test unconventional designs that would be very taxing for humans to design,” he says. “This level of shape diversity hasn’t been explored previously, so most of these designs haven’t been tested in the real world.”
But how did AI come up with these ideas in the first place? First, the researchers found 3D models of over 20 conventional sea exploration shapes, such as submarines, whales, manta rays, and sharks. Then, they enclosed these models in “deformation cages” that map out different articulation points that the researchers pulled around to create new shapes.
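The researchers’ cages operate on 3D meshes; as a rough illustration of the underlying idea (classic free-form deformation, not necessarily the team’s exact parameterisation), here is a hypothetical one-dimensional sketch in which moving a few control points smoothly deforms every vertex:

```python
import numpy as np
from math import comb

def ffd_1d(x: np.ndarray, control_offsets: list[float]) -> np.ndarray:
    """Free-form deformation along one axis (toy version, for illustration).

    x: vertex coordinates normalised to [0, 1].
    control_offsets: displacement of each control point of the cage.
    Each vertex moves by a Bernstein-weighted blend of the control
    displacements, so nearby controls dominate and the result stays smooth.
    """
    n = len(control_offsets) - 1
    weights = np.stack([comb(n, i) * (1 - x) ** (n - i) * x ** i
                        for i in range(n + 1)])
    return x + weights.T @ np.asarray(control_offsets)

verts = np.linspace(0.0, 1.0, 5)      # vertices of a base shape, one axis
offsets = [0.0, 0.05, -0.02, 0.0]     # drag two interior control points
print(ffd_1d(verts, offsets))         # deformed vertex positions
```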
The CSAIL-led team built a dataset of conventional and deformed shapes before simulating how they would perform at different “angles-of-attack” — the direction a vessel will tilt as it glides through the water. For example, a swimmer may want to dive at a -30 degree angle to retrieve an item from a pool.
These diverse shapes and angles of attack were then used as inputs for a neural network that essentially anticipates how efficiently a glider shape will perform at particular angles and optimizes it as needed.
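The paper’s architecture details aren’t spelled out here, but a minimal sketch of such a surrogate, a small network mapping cage offsets plus an angle of attack to a predicted lift-to-drag ratio, might look like the following (all names and sizes are hypothetical):

```python
import torch
import torch.nn as nn

class ShapeSurrogate(nn.Module):
    """Toy surrogate: (cage offsets, angle of attack) -> predicted lift-to-drag."""
    def __init__(self, cage_dim: int = 60, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(cage_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, offsets: torch.Tensor, angle: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([offsets, angle], dim=-1))

model = ShapeSurrogate()  # in practice, trained first on simulator data

# Because the surrogate is differentiable, the shape itself can be optimised:
offsets = torch.zeros(1, 60, requires_grad=True)  # start from the base shape
angle = torch.full((1, 1), 9.0)                   # fix an angle of attack (degrees)
opt = torch.optim.Adam([offsets], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = -model(offsets, angle).mean()  # maximise predicted lift-to-drag
    loss.backward()
    opt.step()
```

The appeal of this kind of surrogate is that each shape evaluation is a cheap forward pass rather than a full fluid simulation, which is what makes searching unconventional designs tractable.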
Giving gliding robots a lift
The team’s neural network simulates how a particular glider would react to underwater physics, aiming to capture how it moves forward and the force that drags against it. The goal: find the best lift-to-drag ratio, representing how much the glider is being held up compared to how much it’s being held back. The higher the ratio, the more efficiently the vehicle travels; the lower it is, the more the glider will slow down during its voyage.
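As a concrete illustration of the ratio (with made-up coefficients, not measured glider values), note that the speed and area terms cancel, so lift-to-drag reduces to the ratio of the lift and drag coefficients:

```python
RHO_WATER = 1000.0  # kg/m^3, fresh water

def lift_to_drag(c_lift: float, c_drag: float, speed: float, area: float) -> float:
    """Lift-to-drag ratio from the standard lift and drag equations.

    lift = 0.5 * rho * v**2 * A * c_lift
    drag = 0.5 * rho * v**2 * A * c_drag
    The shared 0.5 * rho * v**2 * A factor cancels, so the ratio depends
    only on the coefficients, i.e. on shape and angle of attack.
    """
    lift = 0.5 * RHO_WATER * speed ** 2 * area * c_lift
    drag = 0.5 * RHO_WATER * speed ** 2 * area * c_drag
    return lift / drag

print(lift_to_drag(c_lift=0.8, c_drag=0.1, speed=0.5, area=0.3))  # 8.0
```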
Lift-to-drag ratios are key for flying planes: At takeoff, you want to maximize lift to ensure it can glide well against wind currents, and when landing, you need sufficient force to drag it to a full stop.
Niklas Hagemann, an MIT graduate student in architecture and CSAIL affiliate, notes that this ratio is just as useful if you want a similar gliding motion in the ocean.
“Our pipeline modifies glider shapes to find the best lift-to-drag ratio, optimizing its performance underwater,” says Hagemann, who is also a co-lead author on a paper that was presented at the International Conference on Robotics and Automation in June. “You can then export the top-performing designs so they can be 3D-printed.”
Going for a quick glide
While their AI pipeline seemed realistic, the researchers needed to ensure its predictions about glider performance were accurate by experimenting in more lifelike environments.
They first fabricated their two-wing design as a scaled-down vehicle resembling a paper airplane. This glider was taken to MIT’s Wright Brothers Wind Tunnel, an indoor space with fans that simulate wind flow. Placed at different angles, the glider’s predicted lift-to-drag ratio was only about 5 percent higher on average than the ones recorded in the wind experiments — a small difference between simulation and reality.
A digital evaluation involving a visual, more complex physics simulator also supported the notion that the AI pipeline made fairly accurate predictions about how the gliders would move. It visualized how these machines would descend in 3D.
To truly evaluate these gliders in the real world, though, the team needed to see how their devices would fare underwater. They printed the two designs that performed best at specific angles of attack for this test: a jet-like device at 9 degrees and the four-wing vehicle at 30 degrees.
Both shapes were fabricated in a 3D printer as hollow shells with small holes that flood when fully submerged. This lightweight design makes the vehicle easier to handle outside of the water and requires less material to be fabricated. The researchers placed a tube-like device inside these shell coverings, which housed a range of hardware, including a pump to change the glider’s buoyancy, a mass shifter (a device that controls the machine’s angle-of-attack), and electronic components.
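To see how the buoyancy pump produces gliding in the first place, a back-of-envelope force balance helps (illustrative numbers, not the team’s specifications): changing the displaced volume flips the net vertical force between sinking and rising, which the wings convert into forward motion.

```python
RHO_SEAWATER = 1025.0  # kg/m^3
G = 9.81               # m/s^2

def net_vertical_force(mass_kg: float, displaced_volume_m3: float) -> float:
    """Buoyancy minus weight; positive means the glider rises, negative sinks."""
    return RHO_SEAWATER * displaced_volume_m3 * G - mass_kg * G

print(net_vertical_force(10.0, 0.0095))  # ~ -2.6 N: net downward, glider descends
print(net_vertical_force(10.0, 0.0100))  # ~ +2.5 N: net upward, glider ascends
```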
Each design outperformed a handmade torpedo-shaped glider by moving more efficiently across a pool. With higher lift-to-drag ratios than their counterpart, both AI-driven machines exerted less energy, similar to the effortless ways marine animals navigate the oceans.
As much as the project is an encouraging step forward for glider design, the researchers are looking to narrow the gap between simulation and real-world performance. They are also hoping to develop machines that can react to sudden changes in currents, making the gliders more adaptable to seas and oceans.
Chen adds that the team is looking to explore new types of shapes, particularly thinner glider designs. They intend to make their framework faster, perhaps bolstering it with new features that enable more customization, maneuverability, or even the creation of miniature vehicles.
Chen and Hagemann co-led research on this project with OpenAI researcher Pingchuan Ma SM ’23, PhD ’25. They authored the paper with Wei Wang, a University of Wisconsin at Madison assistant professor and recent CSAIL postdoc; John Romanishin ’12, SM ’18, PhD ’23; and two MIT professors and CSAIL members: lab director Daniela Rus and senior author Wojciech Matusik. Their work was supported, in part, by a Defense Advanced Research Projects Agency (DARPA) grant and the MIT-GIST Program.
Collaborating with the force of nature
Common sense tells us to run from molten lava flowing from active volcanoes. But MIT professors J. Jih, Cristina Parreño Alonso, and Skylar Tibbits — faculty in the Department of Architecture at the School of Architecture and Planning — have their bags packed to head to southwest Iceland in anticipation of an imminent volcanic eruption. The Nordic island nation is currently experiencing a period of intense seismic activity; seven volcanic eruptions have taken place in its southern peninsula in under a year.
Earlier this year, the faculty built and placed a series of lightweight, easily deployable steel structures close to the volcano, where a few of the recent eruptions have taken place; several more structures are on trucks waiting to be delivered to sites where fissures open and lava oozes out. Cameras are in place to record what happens when lava hits these structures, helping the team understand the flows.
This new research explores what shapes and materials can be used to interact with lava and successfully divert it away from habitats or critical infrastructure in its path. Their work is supported by a Professor Amar G. Bose Research Grant.
“We’re trying to imagine new ways of conceptualizing infrastructure when it relates to lava and volcanic eruptions,” says Jih, an associate professor of the practice. “Lovely for us as designers, physical prototyping is the only way you can test some of these ideas out.”
Currently, the Icelandic Department of Civic Protection and Emergency Management and an engineering group, EFLA, are diverting the lava with massive berms (approximately 44 to 54 yards in length and 9 yards in height) made from earth and stone.
Berms protecting the town of Grindavik, a power plant, and the popular Blue Lagoon geothermal spa have met with mixed results. In November 2024, a volcano erupted for the seventh time in less than a year, forcing the evacuation of town residents and the Blue Lagoon’s guests and employees. The latter’s parking lot was consumed by lava.
Sigurdur Thorsteinsson, chief brand, design, and innovation officer of the Blue Lagoon, as well as a designer and a partner in Design Group Italia, was on site for this eruption and several others.
“Some magma went into the city of Grindavik and three or four houses were destroyed,” says Thorsteinsson. “One of our employees watched her house go under magma on television, which was an emotional moment.”
While staff at the Blue Lagoon have become very efficient at evacuating guests, says Thorsteinsson, each eruption forces the tourist destination to close and townspeople to evacuate, disrupting lives and livelihoods.
“You cannot really stop the magma,” says Thorsteinsson, who is working with the MIT faculty on this research project. “It’s too powerful.”
Tibbits, associate professor of design research and founder and co-director of the Self-Assembly Lab, agrees. His research explores how to guide or work with the forces of nature.
Last year, Tibbits and Jih were in Iceland on another research project when erupting volcanoes interrupted their work. The two started thinking about how the lava could be redirected.
“The question is: Can we find more strategic interventions in the field that could work with the lava, rather than fight it?” says Tibbits.
To investigate what kinds of materials would withstand this type of interaction, they invited Parreño Alonso, a senior lecturer in the Department of Architecture, to join them.
“Cristina, being the department authority on magma, was an obvious and important partner for us,” says Jih with a smile.
Parreño Alonso has been working with volcanic rock for years and taught a series of design studios exploring volcanic rock as an architectural material. She also has proposed designing structures to engage directly with lava flows and recently has been examining volcanic rock in a molten state and melting basalt in MIT’s foundry with Michael Tarkanian, a senior lecturer in MIT’s Department of Materials Science and Engineering, and Metals Lab director. For this project, she is exploring the potential of molten rock as a substitute for concrete, a widely used material because of its pliability.
“It’s exciting how this idea of working with volcanoes was taking shape in parallel, from different angles, within the same department,” says Parreño Alonso. “I love how these parallel interests have led to such a beautiful collaboration.”
She also sees other opportunities by collaborating with these forces of nature.
“We are interested in the potential of generating something out of the interaction with the lava,” she says. “Could it be a landscape that becomes a park? There are many possibilities.”
The steel structures were first tested at MIT’s Metals Lab with Tarkanian and then built onsite in Iceland. The team wanted to make the structures lightweight so they could be quickly set up in the field, but strong enough so they wouldn’t be easily destroyed. Various designs were created; this iteration of the design has V-shaped structures that can guide the lava to flow around them, or they can be reconfigured as ramps or tunnels.
“There is a road that has been hit by many of the recent eruptions and must keep being rebuilt,” says Tibbits. “We created two ramps that could in the future serve as tunnels, allowing the lava to flow over the road and create a type of lava cave where the cars could drive under the cooled lava.”
Tibbits says they see the structures in the field now as an initial intervention. After documenting and studying how they interact with the lava, the architects will develop new iterations of what they believe will eventually become critical infrastructure for locations around the world with active volcanoes.
“If we can show and prove what kinds of shapes and structures and what kinds of materials can divert magma flows, I think it’s incredibly valuable research,” says Thorsteinsson.
Thorsteinsson lives in Italy half of the year and says the volcanoes there — Mount Etna in Sicily and Mount Vesuvius in the Gulf of Naples — pose a greater danger than those in Iceland because of the densely populated neighborhoods nearby. Volcanoes in Hawaii and Japan are in similarly populated areas.
“Whatever information you can learn about diverting magma flows to other directions and what kinds of structures are needed — it would be priceless,” he says.
© Photo: Marino Thorlacious
AI shapes autonomous underwater “gliders”
Marine scientists have long marveled at how animals like fish and seals swim so efficiently despite having different shapes. Their bodies are optimized for efficient, hydrodynamic aquatic navigation so they can exert minimal energy when traveling long distances.
Autonomous vehicles can drift through the ocean in a similar way, collecting data about vast underwater environments. However, the shapes of these gliding machines are less diverse than what we find in marine life — go-to designs often resemble tubes or torpedoes, since they’re fairly hydrodynamic as well. Plus, testing new builds requires lots of real-world trial-and-error.
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the University of Wisconsin at Madison propose that AI could help us explore uncharted glider designs more conveniently. Their method uses machine learning to test different 3D designs in a physics simulator, then molds them into more hydrodynamic shapes. The resulting models can be fabricated via a 3D printer and use significantly less energy than handmade ones.
The MIT scientists say that this design pipeline could create new, more efficient machines that help oceanographers measure water temperature and salt levels, gather more detailed insights about currents, and monitor the impacts of climate change. The team demonstrated this potential by producing two gliders roughly the size of a boogie board: a two-winged machine resembling an airplane, and a unique, four-winged object resembling a flat fish with four fins.
Peter Yichen Chen, MIT CSAIL postdoc and co-lead researcher on the project, notes that these designs are just a few of the novel shapes his team’s approach can generate. “We’ve developed a semi-automated process that can help us test unconventional designs that would be very taxing for humans to design,” he says. “This level of shape diversity hasn’t been explored previously, so most of these designs haven’t been tested in the real world.”
But how did AI come up with these ideas in the first place? First, the researchers found 3D models of over 20 conventional sea exploration shapes, such as submarines, whales, manta rays, and sharks. Then, they enclosed these models in “deformation cages” that map out different articulation points that the researchers pulled around to create new shapes.
The CSAIL-led team built a dataset of conventional and deformed shapes before simulating how they would perform at different “angles-of-attack” — the direction a vessel will tilt as it glides through the water. For example, a swimmer may want to dive at a -30 degree angle to retrieve an item from a pool.
These diverse shapes and angles of attack were then used as inputs for a neural network that essentially anticipates how efficiently a glider shape will perform at particular angles and optimizes it as needed.
Giving gliding robots a lift
The team’s neural network simulates how a particular glider would react to underwater physics, aiming to capture how it moves forward and the force that drags against it. The goal: find the best lift-to-drag ratio, representing how much the glider is being held up compared to how much it’s being held back. The higher the ratio, the more efficiently the vehicle travels; the lower it is, the more the glider will slow down during its voyage.
Lift-to-drag ratios are key for flying planes: At takeoff, you want to maximize lift so the plane can glide well against wind currents, and when landing, you need enough drag to bring it to a full stop.
Niklas Hagemann, an MIT graduate student in architecture and CSAIL affiliate, notes that this ratio is just as useful if you want a similar gliding motion in the ocean.
“Our pipeline modifies glider shapes to find the best lift-to-drag ratio, optimizing its performance underwater,” says Hagemann, who is also a co-lead author on a paper that was presented at the International Conference on Robotics and Automation in June. “You can then export the top-performing designs so they can be 3D-printed.”
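For readers who want a concrete picture, here is a minimal sketch in Python (with PyTorch) of the kind of loop the article describes: fit a neural surrogate to simulator data mapping shape parameters and an angle of attack to a lift-to-drag ratio, then nudge the deformation-cage parameters toward a higher predicted ratio. Every name, network size, and the stand-in "simulator" below are illustrative assumptions, not the team's actual pipeline.

```python
import torch
import torch.nn as nn

# Stand-in for the physics simulator: returns a lift-to-drag ratio for a
# vector of deformation-cage offsets plus an angle of attack (degrees).
# Purely synthetic, so this sketch runs end-to-end.
def simulate_lift_to_drag(shape, angle):
    return 3.0 - (shape ** 2).sum(dim=-1, keepdim=True) + 0.01 * angle

class LiftDragSurrogate(nn.Module):
    """Maps (cage offsets, angle of attack) to a predicted lift-to-drag ratio."""
    def __init__(self, n_params: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_params + 1, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, shape, angle):
        return self.net(torch.cat([shape, angle], dim=-1))

surrogate = LiftDragSurrogate()
fit_opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

# 1) Fit the surrogate to (shape, angle) -> ratio pairs from the simulator.
for _ in range(500):
    shapes = 0.5 * torch.randn(64, 32)
    angles = torch.rand(64, 1) * 60.0 - 30.0          # -30 to +30 degrees
    loss = nn.functional.mse_loss(
        surrogate(shapes, angles), simulate_lift_to_drag(shapes, angles))
    fit_opt.zero_grad()
    loss.backward()
    fit_opt.step()

# 2) Ascend the surrogate's prediction to deform a shape toward higher L/D.
shape = torch.zeros(1, 32, requires_grad=True)        # 0 = the undeformed design
angle = torch.full((1, 1), 30.0)                      # optimize for a 30-degree glide
shape_opt = torch.optim.Adam([shape], lr=1e-2)
for _ in range(200):
    shape_opt.zero_grad()
    (-surrogate(shape, angle).mean()).backward()      # maximize predicted ratio
    shape_opt.step()

print("predicted lift-to-drag:", surrogate(shape, angle).item())
```

In the real pipeline the top-performing shapes would then be exported for 3D printing; here the output is just the surrogate's predicted ratio.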
Going for a quick glide
While their AI pipeline seemed realistic, the researchers needed to ensure its predictions about glider performance were accurate by experimenting in more lifelike environments.
They first fabricated their two-wing design as a scaled-down vehicle resembling a paper airplane. This glider was taken to MIT’s Wright Brothers Wind Tunnel, an indoor space with fans that simulate wind flow. Placed at different angles, the glider’s predicted lift-to-drag ratios were only about 5 percent higher on average than those recorded in the wind-tunnel experiments — a small difference between simulation and reality.
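As a rough illustration of how such a gap can be quantified (the numbers below are invented, not the paper's measurements), the comparison is a mean relative difference between predicted and measured ratios:

```python
# Invented example values: predicted vs. wind-tunnel lift-to-drag ratios
# at three angles of attack.
predicted = [4.2, 3.9, 3.1]
measured = [4.0, 3.7, 3.0]

gaps = [(p - m) / m * 100 for p, m in zip(predicted, measured)]
print(f"mean simulation-to-reality gap: {sum(gaps) / len(gaps):.1f}%")
```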
A digital evaluation involving a visual, more complex physics simulator also supported the notion that the AI pipeline made fairly accurate predictions about how the gliders would move. It visualized how these machines would descend in 3D.
To truly evaluate these gliders in the real world, though, the team needed to see how their devices would fare underwater. They printed two designs that performed the best at specific points-of-attack for this test: a jet-like device at 9 degrees and the four-wing vehicle at 30 degrees.
Both shapes were fabricated in a 3D printer as hollow shells with small holes that flood when fully submerged. This lightweight design makes the vehicle easier to handle outside of the water and requires less material to be fabricated. The researchers placed a tube-like device inside these shell coverings, which housed a range of hardware, including a pump to change the glider’s buoyancy, a mass shifter (a device that controls the machine’s angle-of-attack), and electronic components.
Each design outperformed a handmade torpedo-shaped glider by moving more efficiently across a pool. With higher lift-to-drag ratios than their counterpart, both AI-driven machines exerted less energy, similar to the effortless ways marine animals navigate the oceans.
As much as the project is an encouraging step forward for glider design, the researchers are looking to narrow the gap between simulation and real-world performance. They are also hoping to develop machines that can react to sudden changes in currents, making the gliders more adaptable to seas and oceans.
Chen adds that the team is looking to explore new types of shapes, particularly thinner glider designs. They intend to make their framework faster, perhaps bolstering it with new features that enable more customization, maneuverability, or even the creation of miniature vehicles.
Chen and Hagemann co-led research on this project with OpenAI researcher Pingchuan Ma SM ’23, PhD ’25. They authored the paper with Wei Wang, a University of Wisconsin at Madison assistant professor and recent CSAIL postdoc; John Romanishin ’12, SM ’18, PhD ’23; and two MIT professors and CSAIL members: lab director Daniela Rus and senior author Wojciech Matusik. Their work was supported, in part, by a Defense Advanced Research Projects Agency (DARPA) grant and the MIT-GIST Program.
© Image courtesy of the researchers.
Collaborating with the force of nature
Common sense tells us to run from molten lava flowing from active volcanoes. But MIT professors J. Jih, Cristina Parreño Alonso, and Skylar Tibbits — faculty in the Department of Architecture at the School of Architecture and Planning — have their bags packed to head to southwest Iceland in anticipation of an imminent volcanic eruption. The Nordic island nation is currently experiencing a period of intense seismic activity; seven volcanic eruptions have taken place in its southern peninsula in under a year.
Earlier this year, the faculty built and placed a series of lightweight, easily deployable steel structures close to the volcano, where a few of the recent eruptions have taken place; several more structures are on trucks waiting to be delivered to sites where fissures open and lava oozes out. Cameras are in place to record what happens when the lava meets these structures, to help the team understand the lava flows.
This new research explores what types of shapes and materials can be used to interact with lava and successfully divert it from heading toward habitats or critical infrastructure in its path. Their work is supported by a Professor Amar G. Bose Research Grant.
“We’re trying to imagine new ways of conceptualizing infrastructure when it relates to lava and volcanic eruptions,” says Jih, an associate professor of the practice. “Lovely for us as designers, physical prototyping is the only way you can test some of these ideas out.”
Currently, the Icelandic Department of Civic Protection and Emergency Management and an engineering group, EFLA, are diverting the lava with massive berms (approximately 44 to 54 yards in length and 9 yards in height) made from earth and stone.
Berms protecting the town of Grindavik, a power plant, and the popular Blue Lagoon geothermal spa have met with mixed results. In November 2024, a volcano erupted for the seventh time in less than a year, forcing the evacuation of town residents and the Blue Lagoon’s guests and employees. The latter’s parking lot was consumed by lava.
Sigurdur Thorsteinsson, chief brand, design, and innovation officer of the Blue Lagoon, as well as a designer and a partner in Design Group Italia, was on site for this eruption and several others.
“Some magma went into the city of Grindavik and three or four houses were destroyed,” says Thorsteinsson. “One of our employees watched her house go under magma on television, which was an emotional moment.”
While staff at the Blue Lagoon have become very efficient at evacuating guests, says Thorsteinsson, each eruption forces the tourist destination to close and townspeople to evacuate, disrupting lives and livelihoods.
“You cannot really stop the magma,” says Thorsteinsson, who is working with the MIT faculty on this research project. “It’s too powerful.”
Tibbits, associate professor of design research and founder and co-director of the Self-Assembly Lab, agrees. His research explores how to guide or work with the forces of nature.
Last year, Tibbits and Jih were in Iceland on another research project when erupting volcanoes interrupted their work. The two started thinking about how the lava could be redirected.
“The question is: Can we find more strategic interventions in the field that could work with the lava, rather than fight it?” says Tibbits.
To investigate what kinds of materials would withstand this type of interaction, they invited Parreño Alonso, a senior lecturer in the Department of Architecture, to join them.
“Cristina, being the department authority on magma, was an obvious and important partner for us,” says Jih with a smile.
Parreño Alonso has been working with volcanic rock for years and taught a series of design studios exploring volcanic rock as an architectural material. She also has proposed designing structures to engage directly with lava flows, and recently has been examining volcanic rock in a molten state, melting basalt in MIT’s foundry with Michael Tarkanian, a senior lecturer in MIT’s Department of Materials Science and Engineering and director of its Metals Lab. For this project, she is exploring the potential of molten rock as a substitute for concrete, which is widely used because of its pliability.
“It’s exciting how this idea of working with volcanoes was taking shape in parallel, from different angles, within the same department,” says Parreño Alonso. “I love how these parallel interests have led to such a beautiful collaboration.”
She also sees other opportunities by collaborating with these forces of nature.
“We are interested in the potential of generating something out of the interaction with the lava,” she says. “Could it be a landscape that becomes a park? There are many possibilities.”
The steel structures were first tested at MIT’s Metals Lab with Tarkanian and then built onsite in Iceland. The team wanted the structures lightweight, so they could be quickly set up in the field, but strong enough that they wouldn’t be easily destroyed. Various designs were created; the current iteration uses V-shaped structures that can guide the lava to flow around them and can be reconfigured as ramps or tunnels.
“There is a road that has been hit by many of the recent eruptions and must keep being rebuilt,” says Tibbits. “We created two ramps that could in the future serve as tunnels, allowing the lava to flow over the road and create a type of lava cave where the cars could drive under the cooled lava.”
Tibbits says they see the structures in the field now as an initial intervention. After documenting and studying how they interact with the lava, the architects will develop new iterations of what they believe will eventually become critical infrastructure for locations around the world with active volcanoes.
“If we can show and prove what kinds of shapes and structures and what kinds of materials can divert magma flows, I think it’s incredibly valuable research,” says Thorsteinsson.
Thorsteinsson lives in Italy half of the year and says the volcanoes there — Mount Etna in Sicily and Mount Vesuvius in the Gulf of Naples — pose a greater danger than those in Iceland because of the densely populated neighborhoods nearby. Volcanoes in Hawaii and Japan are in similarly populated areas.
“Whatever information you can learn about diverting magma flows to other directions and what kinds of structures are needed — it would be priceless,” he says.
© Photo: Marino Thorlacious
Long in the tooth

Kevin Uno (left) and Daniel Green look at fossil samples in the lab.
Photo by Grace DuVal
Clea Simon
Harvard Correspondent
Research finds 18-million-year-old enamel proteins in mammal fossils, offering window into how prehistoric animals lived, evolved
Proteins degrade over time, making their history hard to study. But new research has uncovered ancient proteins in the enamel of the teeth of 18-million-year-old fossilized mammals from Kenya’s Rift Valley, opening a window into how these animals lived and evolved.
In their new paper in Nature, researchers from Harvard and the Smithsonian Museum Conservation Institute discuss their findings.
“Teeth are rocks in our mouths,” explained Daniel Green, field program director in the Department of Human Evolutionary Biology and the paper’s lead author. “They’re the hardest structures that any animals make, so you can find a tooth that is a hundred or a hundred million years old, and it will contain a geochemical record of the life of the animal.”
That includes what the animal ate and drank, as well as its environment.

Green examining fossils from a northern Kenyan site called Napudet.
Photo by Fred Horne

A fossil sample.
Photo by Grace DuVal
“In the past, we thought that mature enamel, the hardest part of teeth, should really have very few proteins in it at all,” said Green. However, utilizing a newer proteomics technique called liquid chromatography tandem mass spectrometry, the team was able to detect “a great diversity of proteins … in different biological tissues.”
“The technique involves several stages where peptides are separated based on their size or chemistry so that they can be sequentially analyzed at higher resolutions than was possible with previous methods,” explained Kevin T. Uno, associate professor in HEB and one of the paper’s corresponding authors.
“We and other scholars recently found that there are dozens — if not even hundreds — of different kinds of proteins present inside tooth enamel,” said Green.
With the realization that many proteins are found in contemporary teeth, the researchers turned to fossils, collaborating with the Smithsonian and the National Museum of Kenya for access to fossilized teeth, particularly those of early elephants and rhinos.
As herbivores, they had large teeth to grind the plants that made up their diets. These mammals, Green said, “can have enamel two to three millimeters thick. It was a lot of material to work with.”
What they found — peptide fragments, the chains of amino acids that together form proteins, as old as 18 million years — was “field-changing,” according to Green.
“Nobody’s ever found peptide fragments that are this old before,” he said, calling the findings “kind of shocking.”
Previously, the oldest findings were put at about 3.5 million years old, he said.
“With the help of our colleague Tim Cleland, a superb paleoproteomicist at the Smithsonian, we’re pushing back the age of peptide fragments by five or six times what was known before.”

Formed approximately 16 million years ago, the Buluk site in Kenya is found in one of the most remote and inhospitable places in the rift, but has yielded an extraordinary diversity of fossil fauna.
Photo by Ellen Miller
The newly discovered peptides cover a range of proteins that perform different functions, altogether known as the proteome, Green said.
“One of the reasons that we’re excited about these ancient teeth is that we don’t have the full proteome of all proteins that could have been found inside the bodies of these ancient elephants or rhinoceros, but we do have a group of them.”
With such a collection, “There might be more information available from a group of them than just one protein by itself.”
This research “opens new frontiers in paleobiology, allowing scientists to go beyond bones and morphology to reconstruct the molecular and physiological traits of extinct animals and hominins,” said Emmanuel K. Ndiema, senior research scientist at the National Museum of Kenya and paper co-author. “This provides direct evidence of evolutionary relationships. Combined with other characteristics of teeth, we can infer dietary adaptations, disease profiles, and even age at death — insights that were previously inaccessible.”
In addition to shedding light on the lives of these creatures, it helps place them in history.
“We can use these peptide fragments to explore the relationships between ancient animals, similar to how modern DNA in humans is used to identify how people are related to one another,” Uno said.
“Even if an animal is completely extinct — and we have some animals that we analyze in our study who have no living descendants — you can still, in theory, extract proteins from their teeth and try to place them on a phylogenetic tree,” said Green.
Such information “might be able to resolve longstanding debates between paleontologists about what other mammalian lineages these animals are related to using molecular evidence.”
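A toy sketch of that idea, in Python with SciPy: cluster a few taxa by the similarity of a shared enamel peptide, the way molecular evidence can place extinct animals on a tree. The sequences and taxon names below are invented for illustration; real analyses use many peptides, proper alignment, and substitution models rather than a raw mismatch count.

```python
from itertools import combinations
from scipy.cluster.hierarchy import dendrogram, linkage

# Invented peptide fragments for three taxa (illustrative only).
peptides = {
    "extinct_proboscidean": "GLPGKHGLPGE",
    "modern_elephant":      "GLPGKHGLPGD",
    "modern_rhinoceros":    "GLAGKHGMPGD",
}
names = list(peptides)

def distance(a: str, b: str) -> float:
    """Fraction of mismatched positions between equal-length peptides."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

# Condensed pairwise distance list, in the order linkage() expects.
dists = [distance(peptides[a], peptides[b]) for a, b in combinations(names, 2)]
tree = linkage(dists, method="average")               # UPGMA-style clustering
print(dendrogram(tree, labels=names, no_plot=True)["ivl"])  # leaf order of the tree
```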
Although this research began as “a small side project” of a much larger project involving dozens of institutions and researchers from around the world, said Green, “We were surprised at just how much we found. There really are a lot of proteins preserved in these teeth.”
This research was partially funded by the National Science Foundation and Smithsonian’s Museum Conservation Institute.
British-French research partnership on AI

During the French President's state visit to the United Kingdom, Institut Polytechnique de Paris (IP Paris), HEC Paris, Université Paris-Saclay, Oxford University and Cambridge University formalised a joint commitment to create a strategic partnership in the field of artificial intelligence.
Named the Entente CordIAle Paris-Saclay – Oxford-Cambridge AI Initiative, this partnership brings together two leading centres of scientific and technological excellence: the Saclay Cluster and the Universities of Oxford and Cambridge. They share a common ambition - to foster the emergence of excellent, ethical and sovereign artificial intelligence on a European scale.
The aim of the partnership is to structure long-term cooperation in AI research, training and innovation, in order to meet the major challenges of our time. It is organised around five key areas:
- Encouraging academic mobility between students, doctoral students, researchers and teachers to enhance expertise and training.
- Organising joint scientific events (seminars, workshops, symposia) on the major scientific and ethical challenges of AI.
- Launching collaborative research projects: co-direction of theses, interdisciplinary programmes, joint applications for funding.
- Involving industrial and innovation players, to accelerate technology transfer and support AI entrepreneurship.
- Strengthening bilateral cooperation, in line with national and European strategic priorities.
The 'Entente CordIAle Paris-Saclay – Oxford-Cambridge AI Initiative' extends the shared vision of Institut Polytechnique de Paris and HEC Paris: to establish a leading European hub in artificial intelligence, at the intersection of cutting-edge research, innovation, and the major challenges of our time.
This firmly solution-oriented ambition is realized through Hi! PARIS, a key actor in the France 2030 strategy that integrates cutting-edge research, excellence in education, and concrete technological innovation to enhance European competitiveness. The interdisciplinary centre was co-founded by IP Paris and HEC Paris in 2020, joined by Inria in 2021, and benefits from €70 million in funding over five years.
In a joint statement, Thierry Coulhon, President of Institut Polytechnique de Paris, and Eloïc Peyrache, Dean of HEC Paris, said:
"With the Entente CordIAle Paris-Saclay – Oxford-Cambridge AI Initiative, we are taking a decisive step forward in European scientific and academic cooperation. By bringing together the excellence of our institutions, through the interdisciplinary centre Hi! PARIS, with that of Oxford and Cambridge, we are laying the foundation for an unparalleled axis of research and innovation in artificial intelligence."
Professor Deborah Prentice, Vice-Chancellor of the University of Cambridge, concurred:
"The University of Cambridge is proud to be part of this collaboration, which reflects our deep commitment to shaping the future of AI through rigorous research, inclusive education, and responsible innovation. Combining our strengths and sharing knowledge will help us to address the most pressing challenges of our time and ensure AI serves the common good."
The Saclay Cluster – which includes Institut Polytechnique de Paris, HEC Paris and Université Paris-Saclay – and the Universities of Oxford and Cambridge are joining forces to build AI excellence.
Implantable device could save diabetes patients from dangerously low blood sugar
For people with Type 1 diabetes, developing hypoglycemia, or low blood sugar, is an ever-present threat. When glucose levels become extremely low, it creates a life-threatening situation for which the standard treatment is injecting a hormone called glucagon.
As an emergency backup, for cases where patients may not realize that their blood sugar is dropping to dangerous levels, MIT engineers have designed an implantable reservoir that can remain under the skin and be triggered to release glucagon when blood sugar levels get too low.
This approach could also help in cases where hypoglycemia occurs during sleep, or for diabetic children who are unable to administer injections on their own.
“This is a small, emergency-event device that can be placed under the skin, where it is ready to act if the patient’s blood sugar drops too low,” says Daniel Anderson, a professor in MIT’s Department of Chemical Engineering, a member of MIT’s Koch Institute for Integrative Cancer Research and Institute for Medical Engineering and Science (IMES), and the senior author of the study. “Our goal was to build a device that is always ready to protect patients from low blood sugar. We think this can also help relieve the fear of hypoglycemia that many patients, and their parents, suffer from.”
The researchers showed that this device could also be used to deliver emergency doses of epinephrine, a drug that is used to treat cardiac arrest and can also prevent severe allergic reactions, including anaphylactic shock.
Siddharth Krishnan, a former MIT research scientist who is now an assistant professor of electrical engineering at Stanford University, is the lead author of the study, which appears today in Nature Biomedical Engineering.
Emergency response
Most patients with type 1 diabetes use daily insulin injections to help their body absorb sugar and prevent their blood sugar levels from getting too high. However, if their blood sugar levels get too low, they develop hypoglycemia, which can lead to confusion and seizures, and may be fatal if it goes untreated.
To combat hypoglycemia, some patients carry preloaded syringes of glucagon, a hormone that stimulates the liver to release glucose into the bloodstream. However, it isn’t always easy for people, especially children, to know when they are becoming hypoglycemic.
“Some patients can sense when they’re getting low blood sugar, and go eat something or give themselves glucagon,” Anderson says. “But some are unaware that they’re hypoglycemic, and they can just slip into confusion and coma. This is also a problem when patients sleep, as they are reliant on glucose sensor alarms to wake them when sugar drops dangerously low.”
To make it easier to counteract hypoglycemia, the MIT team set out to design an emergency device that could be triggered either by the person using it, or automatically by a sensor.
The device, which is about the size of a quarter, contains a small drug reservoir made of a 3D-printed polymer. The reservoir is sealed with a special material known as a shape-memory alloy, which can be programmed to change its shape when heated. In this case, the researchers used a nickel-titanium alloy that is programmed to curl from a flat slab into a U-shape when heated to 40 degrees Celsius.
Like many other protein or peptide drugs, glucagon tends to break down quickly, so the liquid form can’t be stored long-term in the body. Instead, the MIT team created a powdered version of the drug, which remains stable for much longer and stays in the reservoir until released.
Each device can carry either one or four doses of glucagon, and it also includes an antenna tuned to respond to a specific frequency in the radiofrequency range. That allows it to be remotely triggered to turn on a small electrical current, which is used to heat the shape-memory alloy. When the temperature reaches the 40-degree threshold, the slab bends into a U shape, releasing the contents of the reservoir.
Because the device can receive wireless signals, it could also be designed so that drug release is triggered by a glucose monitor when the wearer’s blood sugar drops below a certain level.
“One of the key features of this type of digital drug delivery system is that you can have it talk to sensors,” Krishnan says. “In this case, the continuous glucose-monitoring technology that a lot of patients use is something that would be easy for these types of devices to interface with.”
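A minimal sketch of that sensor-to-implant hookup, with invented names and thresholds (nothing here reflects the device's actual firmware or radio protocol): a continuous glucose reading drives a wireless release command once it crosses a cutoff.

```python
# Assumed severe-hypoglycemia cutoff for illustration, in mg/dL.
HYPO_THRESHOLD_MG_DL = 54

def check_and_trigger(glucose_mg_dl: float, send_rf_trigger) -> bool:
    """Fire the implant's RF trigger once glucose falls below the threshold."""
    if glucose_mg_dl < HYPO_THRESHOLD_MG_DL:
        send_rf_trigger()   # heats the shape-memory alloy past 40 degrees C,
        return True         # which bends the seal open and releases glucagon
    return False

# Example: a falling glucose trace from a monitor.
for reading in [80, 70, 61, 53]:
    if check_and_trigger(reading, send_rf_trigger=lambda: print("release!")):
        break
```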
Reversing hypoglycemia
After implanting the device in diabetic mice, the researchers used it to trigger glucagon release as the animals’ blood sugar levels were dropping. Within less than 10 minutes of activating the drug release, blood sugar levels began to level off, allowing them to remain within the normal range and avert hypoglycemia.
The researchers also tested the device with a powdered version of epinephrine. They found that within 10 minutes of drug release, epinephrine levels in the bloodstream became elevated and heart rate increased.
In this study, the researchers kept the devices implanted for up to four weeks, but they now plan to see if they can extend that time up to at least a year.
“The idea is you would have enough doses that can provide this therapeutic rescue event over a significant period of time. We don’t know exactly what that is — maybe a year, maybe a few years, and we’re currently working on establishing what the optimal lifetime is. But then after that, it would need to be replaced,” Krishnan says.
Typically, when a medical device is implanted in the body, scar tissue develops around the device, which can interfere with its function. However, in this study, the researchers showed that even after fibrotic tissue formed around the implant, they were able to successfully trigger the drug release.
The researchers are now planning for additional animal studies and hope to begin testing the device in clinical trials within the next three years.
“It’s really exciting to see our team accomplish this, which I hope will someday help diabetic patients and could more broadly provide a new paradigm for delivering any emergency medicine,” says Robert Langer, the David H. Koch Institute Professor at MIT and an author of the paper.
Other authors of the paper include Laura O’Keeffe, Arnab Rudra, Derin Gumustop, Nima Khatib, Claudia Liu, Jiawei Yang, Athena Wang, Matthew Bochenek, Yen-Chun Lu, Suman Bose, and Kaelan Reed.
The research was funded by the Leona M. and Harry B. Helmsley Charitable Trust, the National Institutes of Health, a JDRF postdoctoral fellowship, and the National Institute of Biomedical Imaging and Bioengineering.
© Image: Courtesy of the researchers
Processing our technological angst through humor
The first time Steve Jobs held a public demo of the Apple Macintosh, in early 1984, scripted jokes were part of the rollout. First, Jobs pulled the machine out of a bag. Then, using its built-in speech software, the Macintosh made a quip about rival IBM’s mainframes: “Never trust a computer you can’t lift.”
There’s a reason Jobs was doing that. For the first few decades after computing became part of cultural life, starting in the 1950s, computers seemed unfriendly, grim, and liable to work against human interests. Take the 1968 film “2001: A Space Odyssey,” in which the onboard computer, HAL, turns against the expedition’s astronauts. It’s a famous cultural touchstone. Jobs, in selling the idea of a personal computer, was using humor to ease concerns about the machines.
“Against the sense of computing as cold and numbers-driven, the fact that this computer was using voice technology to deliver jokes made it seem less forbidding, less evil,” says MIT scholar Benjamin Mangrum.
In fact, this dynamic turns up throughout modern culture, in movies, television, fiction, and the theater. We often deal with our doubts and fears about computing through humor, whether reconciling ourselves to machines or critiquing them. Now, Mangrum analyzes this phenomenon in a new book, “The Comedy of Computation: Or, How I Learned to Stop Worrying and Love Obsolescence,” published this month by Stanford University Press.
“Comedy has been a form for making this technology seem ordinary,” says Mangrum, an associate professor in MIT’s literature program. “Where in other circumstances computing might seem inhuman or impersonal, comedy allows us to incorporate it into our lives in a way that makes it make sense.”
Reversals of fortune
Mangrum’s interest in the subject was sparked partly by William Marchant’s 1955 play, “The Desk Set” — a romantic comedy later turned into a film starring Katharine Hepburn and Spencer Tracy — which queries, among other things, how office workers will co-exist alongside computers.
Perhaps against expectations, romantic comedies have turned out to be one of the most prominent contemporary forms of culture that grapple with technology and its effects on us. Mangrum, in the book, explains why: Their plot structure often involves reversals, which sometimes are extended to technology, too. Computing might seem forbidding, but it might also pull people together.
“One of the common tropes about romantic comedies is that there are characters or factors in the drama that obstruct the happy union of two people,” Mangrum observes. “And often across the arc of the drama, the obstruction or obstructive character is transformed into a partner, or collaborator, and assimilated within the happy couple’s union. That provides a template for how some cultural producers want to present the experience of computing. It begins as an obstruction and ends as a partner.”
That plot structure, Mangrum notes, dates to antiquity and was common in Shakespeare’s day. Still, as he writes in the book, there is “no timeless reality called Comedy,” as the vehicles and forms of it change over time. Beyond that, specific jokes about computing can quickly become outmoded. Steve Jobs made fun of mainframes, and the 1998 Nora Ephron comedy “You’ve Got Mail” got laughs out of dial-up modems, but those jokes might leave most people puzzled today.
“Comedy is not a fixed resource,” Mangrum says. “It’s an ever-changing toolbox.”
Continuing this evolution into the 21st century, Mangrum observes that a lot of computational comedy centers on an entire category of commentary he calls “the Great Tech-Industrial Joke.” This focuses on the gap between noble-sounding declared aspirations of technology and the sometimes-dismal outcomes it creates.
Social media, for instance, promised new worlds of connectivity and social exploration, and has benefits people enjoy — but it has also generated polarization, misinformation, and toxicity. Technology’s social effects are complex. Whole television shows, such as “Silicon Valley,” have dug into this terrain.
“The tech industry announces that some of its products have revolutionary or utopian aims, but the achievements of many of them fall far short of that,” Mangrum says. “It’s a funny setup for a joke. People have been claiming we’re saving the world, when actually we’re just processing emails faster. But it’s a mode of criticism aimed at big tech, since its products are more complicated.”
A complicated, messy picture
“The Comedy of Computation” digs into several other facets of modern culture and technology. The notion of personal authenticity, as Mangrum observes, is a fairly recent and modern construct in society — and it’s another sphere of life that collides with computing, since social media is full of charges of inauthenticity.
“That ethics of authenticity connects to comedy, as we make jokes about people not being authentic,” Mangrum says.
“The Comedy of Computation” has received praise from other scholars. Mark Goble, a professor of English at the University of California at Berkeley, has called it “essential for understanding the technological world in its complexity, absurdity, and vibrancy.”
For his part, Mangrum emphasizes that his book is an exploration of the full complexity of technology, culture, and society.
“There’s this really complicated, messy picture,” Mangrum says. “And comedy sometimes finds a way of experiencing and finding pleasure in that messiness, and other times it neatly wraps it up in a lesson that can make things neater than they actually are.”
Mangrum adds that the book focuses on “the combination of the threat and pleasure that’s involved across the history of the computer, in the ways it’s been assimilated and shaped society, with real advances and benefits, along with real threats, for instance to employment. I’m interested in the duality, the simultaneous and seemingly conflicting features of that experience.”
© Credit: Courtesy of Stanford University Press; Allegra Boverman
MIT Open Learning bootcamp supports effort to bring invention for long-term fentanyl recovery to market
Evan Kharasch, professor of anesthesiology and vice chair for innovation at Duke University, has developed two approaches that may aid in fentanyl addiction recovery. After attending MIT’s Substance Use Disorders (SUD) Ventures Bootcamp, he’s committed to bringing them to market.
Illicit fentanyl addiction is still a national emergency in the United States, fueled by years of opioid misuse. As opioid prescriptions fell by 50 percent over 15 years, many turned to street drugs. Among those drugs, fentanyl stands out for its potency — just 2 milligrams can be fatal — and its low production cost. Often mixed with other drugs, it contributed to a large portion of over 80,000 overdose deaths in 2024. It has been particularly challenging to treat with currently available medications for opioid use disorder.
As an anesthesiologist, Kharasch is highly experienced with opioids, including methadone, one of only three drugs approved in the United States for treating opioid use disorder. Methadone is a key option for managing fentanyl use. It’s employed to transition patients off fentanyl and to support ongoing maintenance, but access is limited, with only 20 percent of eligible patients receiving it. Initiating and adjusting methadone treatment can take weeks due to its clinical characteristics, often causing withdrawal and requiring longer hospital stays. Maintenance demands daily visits to one of just over 2,000 clinics, disrupting work or study and leading most patients to drop out after a few months.
To tackle these challenges, Kharasch developed two novel methadone formulations: one for faster absorption to cut initiation time from weeks to days — or even hours — and one to slow elimination, thereby potentially requiring only weekly, rather than daily, dosing. As a clinician, scientist, and entrepreneur, he sees the science as demanding, but bringing these treatments to patients presents an even greater challenge. Kharasch learned about the SUD Ventures Bootcamp, part of MIT Open Learning, as a recipient of research funding from the National Institute on Drug Abuse (NIDA). He decided to apply to bridge the gap in his expertise and was selected to attend as a fellow.
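Those two formulations turn on classic pharmacokinetic levers. As a rough illustration, here is a standard one-compartment model with first-order absorption in Python; the parameter values are invented, not methadone's real pharmacokinetics, and serve only to show why a faster-absorbing formulation shortens initiation while a slower-eliminating one stretches the dosing interval.

```python
import math

def concentration(t, dose=10.0, ka=1.0, ke=0.05, volume=50.0):
    """Bateman equation: plasma level t hours after a single oral dose."""
    return (dose * ka) / (volume * (ka - ke)) * (
        math.exp(-ke * t) - math.exp(-ka * t)
    )

for ka, ke, label in [(1.0, 0.05, "baseline"),
                      (4.0, 0.05, "faster absorption"),
                      (1.0, 0.01, "slower elimination")]:
    t_peak = math.log(ka / ke) / (ka - ke)        # time of maximum level
    print(f"{label}: peak at {t_peak:.1f} h, "
          f"level at 24 h = {concentration(24, ka=ka, ke=ke):.2f}")
```

The direction of the effect is the point: the fast-absorbing variant peaks hours sooner, and the slow-eliminating one retains far more drug a day later, which is what would make less frequent dosing plausible.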
Each year, the SUD Ventures Bootcamp unites innovators — including scientists, entrepreneurs, and medical professionals — to develop bold, cross-disciplinary solutions to substance use disorders. Through online learning and an intensive one-week in-person bootcamp, teams tackle challenges in different “high priority” areas. Guided by experts in science, entrepreneurship, and policy, they build and pitch ventures aimed at real-world impact. Beyond the multidisciplinary curriculum, the program connects people deeply committed to this space and equipped to drive progress.
Throughout the program, Kharasch’s concepts were validated by the invited industry experts, who highlighted the potential impact of a longer-acting methadone formulation, particularly in correctional settings. Encouragement from MIT professors, coaches, and peers energized Kharasch to fully pursue commercialization. He has already begun securing intellectual property rights, validating the regulatory pathway through the U.S. Food and Drug Administration, and gathering market and patient feedback.
The SUD Ventures Bootcamp, he says, both activated and validated his passion for bringing these innovations to patients. “After many years of basic, translational, and clinical research on methadone — all supported by NIDA — I experienced that ‘aha’ moment of recognizing a potential opportunity to apply the findings to benefit patients at scale,” Kharasch says. “The NIDA-sponsored participation in the MIT SUD Ventures Bootcamp was the critical catalyst which ignited the inspiration and commitment to pursue commercializing our research findings into better treatments for opioid use disorder.”
As next steps, Kharasch is seeking an experienced co-founder and finalizing IP protections. He remains engaged with the SUD Ventures network, whose mentors, industry experts, and peers are helping advance this needed solution to market. For example, program mentor Nat Sims, the Newbower/Eitan Endowed Chair in Biomedical Technology Innovation at Massachusetts General Hospital (MGH) and a fellow anesthesiologist, has helped Kharasch arrange technology validation conversations within the MGH ecosystem and the drug development community.
“Evan’s collaboration with the MGH ecosystem can help define an optimum process for commercializing these innovations — identifying who would benefit, how they would benefit, and who is willing to pilot the product once it’s available,” says Sims.
Kharasch has also presented his project in the program’s webinar series. Looking ahead, Kharasch hopes to involve MIT Sloan School of Management students in advancing his project through health care entrepreneurship classes, continuing the momentum that began with the SUD Ventures Bootcamp.
The program and its research are supported by NIDA, part of the National Institutes of Health. Cynthia Breazeal, a professor of media arts and sciences at the MIT Media Lab and dean for digital learning at MIT Open Learning, serves as the principal investigator on the grant.
© Photo: Chris McIntosh
Gut microbes could protect us from toxic ‘forever chemicals’

PFAS have been linked with a range of health issues including decreased fertility, developmental delays in children, and a higher risk of certain cancers and cardiovascular diseases.
Scientists at the University of Cambridge have identified a family of bacterial species, found naturally in the human gut, that absorb various PFAS molecules from their surroundings. When nine of these bacterial species were introduced into the guts of mice to ‘humanise’ the mouse microbiome, the bacteria rapidly accumulated PFAS eaten by the mice - which were then excreted in faeces.
The researchers also found that as the mice were exposed to increasing levels of PFAS, the microbes worked harder, consistently removing the same percentage of the toxic chemicals. Within minutes of exposure, the bacterial species tested soaked up between 25% and 74% of the PFAS.
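In other words, uptake looked dose-proportional: a roughly fixed fraction removed regardless of exposure level, so absolute removal scales with dose. A toy calculation (the removal fraction is drawn from the study's reported 25 to 74 percent range; the dose units are arbitrary):

```python
# Dose-proportional uptake: removed amount = fixed fraction * dose.
removal_fraction = 0.50      # anywhere from 0.25 to 0.74 per species
for dose in [1.0, 10.0, 100.0]:
    removed = removal_fraction * dose
    print(f"dose {dose:6.1f} -> removed {removed:6.1f} ({removal_fraction:.0%})")
```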
The results are the first evidence that our gut microbiome could play a helpful role in removing toxic PFAS chemicals from our body - although this has not yet been directly tested in humans.
The researchers plan to use their discovery to create probiotic dietary supplements that boost the levels of these helpful microbes in our gut, to protect against the toxic effects of PFAS.
The results are published in the journal Nature Microbiology.
PFAS (per- and polyfluoroalkyl substances) can’t be avoided in our modern world. These man-made chemicals are in many everyday items including waterproof clothing, non-stick pans, lipsticks and food packaging, used for their resistance to heat, water, oil and grease. But because they take thousands of years to break down, they are accumulating in large quantities in the environment – and in our bodies.
Dr Kiran Patil, in the University of Cambridge’s MRC Toxicology Unit and senior author of the report, said: “Given the scale of the problem of PFAS ‘forever chemicals’, particularly their effects on human health, it’s concerning that so little is being done about removing these from our bodies.”
“We found that certain species of human gut bacteria have a remarkably high capacity to soak up PFAS from their environment at a range of concentrations, and store these in clumps inside their cells. Due to aggregation of PFAS in these clumps, the bacteria themselves seem protected from the toxic effects.”
Dr Indra Roux, a researcher at the University of Cambridge’s MRC Toxicology Unit and a co-author of the study said: “The reality is that PFAS are already in the environment and in our bodies, and we need to try and mitigate their impact on our health now. We haven’t found a way to destroy PFAS, but our findings open the possibility of developing ways to get them out of our bodies where they do the most harm.”
There is increasing concern about the environmental and health impacts of PFAS, and in April 2025 the UK launched a parliamentary inquiry into their risks and regulation.
There are over 4,700 PFAS chemicals in widespread use. Some get cleared out of the body in our urine in a matter of days, but others with a longer molecular structure can hang around in the body for years.
Dr Anna Lindell, a researcher at the University of Cambridge’s MRC Toxicology Unit and first author of the study said: “We’re all being exposed to PFAS through our water and food – these chemicals are so widespread that they’re in all of us.
“PFAS were once considered safe, but it’s now clear that they’re not. It’s taken a long time for PFAS to become noticed because at low levels they’re not acutely toxic. But they’re like a slow poison.”
Lindell and Patil have co-founded a startup, Cambiotics, with serial entrepreneur Peter Holme Jensen to develop probiotics that remove PFAS from the body, and they are investigating various ways of turbo-charging the microbes’ performance. Cambiotics is supported by Cambridge Enterprise, the innovation arm of the University of Cambridge, which helps researchers translate their work into globally-leading economic and social impact.
While we wait for new probiotics to become available, the researchers say the best things we can do to help protect ourselves against PFAS are to avoid PFAS-coated cooking pans and to use a good water filter.
The research was funded primarily by the Medical Research Council, National Institute for Health Research, and Wellcome.
Reference
Lindell, AE: ‘Human gut bacteria bioaccumulate per- and polyfluoroalkyl substances.’ Nature Microbiology, July 2025. DOI: 10.1038/s41564-025-02032-5
Scientists have discovered that certain species of microbe found in the human gut can absorb PFAS - the toxic and long-lasting ‘forever chemicals.’ They say boosting these species in our gut microbiome could help protect us from the harmful effects of PFAS.
Celebrating sporting success at the 2025 Cambridge University Sports Awards

Organised by the University Sports service, the annual ceremony brought together students, staff, alumni, and guests to recognise the exceptional contributions and successes of sports clubs, teams, and individuals across the University.
Hosted by Director of Sport Mark Brian, the awards were presented by a distinguished line-up of guests including Professor Bhaskar Vira (Pro-Vice-Chancellor for Education and Chair of the Sports Committee), Deborah Griffin (incoming RFU President), Scott Annett (CURUFC Director of Rugby), and Senior Tutors and Committee Members Victoria Harvey and Dr Jane Greatorex. Former Sports Personality of the Year Jack Murphy returned to present one of the evening’s headline awards.
The awards shine a light on the importance of sport as part of the Cambridge experience - enhancing student wellbeing, building community, and nurturing excellence both on and off the field. The winners were selected by a panel of senior University staff, with the exception of the Sporting Moment of the Year, which was decided by public vote.
This year’s winners:
Club of the Year: Association Football Club
Team of the Year: Women’s Cross Country A Team, Hare & Hounds
Sports Person of the Year: Jan Helmich (Trinity Hall), Rowing
Unsung Hero: Emma Paterson (Gonville and Caius), Mixed Lacrosse
Sports Club Personality of the Year: Tads Ciecieski-Holmes (Wolfson), Modern Pentathlon
Sporting Moment of the Year: Men’s Volleyball Blues Varsity Set Point
Newcomer of the Year: Lauren Airey (Emmanuel), Modern Pentathlon
College Team of the Year: Downing Table Tennis
Outstanding Contribution Awards were presented to:
- Lucy Xu (Pembroke), Taekwondo
- Sam Grimshaw (Girton), Hockey
- Georgina Quayle (Homerton), Modern Pentathlon and Swimming & Water Polo
- Ben Rhodes (Jesus), Touch Rugby
- Izzy Howse (Robinson), Netball
- Ksenija Belada (Peterhouse), Volleyball
- Izzy Winter and Jess Reeve, Clarissa’s Campaign for Cambridge Hearts
A particularly moving moment came during the presentation of an Outstanding Contribution Award to Clarissa’s Campaign for Cambridge Hearts, recognising efforts by Izzy Winter and Jess Reeve to raise funds and awareness for student heart screenings. For more information on the October 2025 screenings, visit www.sport.cam.ac.uk/heart-screening.
The University extends its congratulations to all nominees and winners, and its thanks to everyone who participated in and supported the 2025 Sports Awards. The event was a testament to the passion, resilience, and camaraderie that sport brings to the Cambridge community.
To read more about all the nominees, please visit the Sports Awards page: https://www.sport.cam.ac.uk/sportsawards/sports-awards-2025
Story by: Will Galpin
Crowds cheer on the nominees and winners at the 2025 Sports Awards.
The University of Cambridge recently celebrated a remarkable year of student sporting achievement at the 2025 Cambridge University Sports Awards.
UK Ambassador to the US visits Cambridge to discuss opportunities for deepening UK-US tech collaboration

As the world’s most intensive science and technology cluster, Cambridge is driving breakthrough research and attracting global investment across quantum, life sciences, and biotech.
During his visit, hosted by Founders at University of Cambridge and Innovate Cambridge, the Ambassador heard about the University’s success in securing funding for these critical areas and its bold plans to fuel national economic growth—most notably through the National Innovation Hub and the West Cambridge Innovation District, set to become Europe’s leading centre for AI, quantum, and climate research.
At the heart of the visit was a tour of the new Ray Dolby Centre, home to the historic Cavendish Laboratory. Hosted by Professor Mete Atatüre, Head of the Department of Physics, Lord Mandelson learned about Cambridge’s leadership in quantum technologies and the rapidly growing portfolio of real-world applications emerging from this research.
Vice-Chancellor Professor Deborah Prentice then hosted a roundtable lunch at Cambridge Enterprise, the University’s commercialisation arm, where leaders from high-growth companies in quantum, AI, and life sciences joined to discuss opportunities for deepening UK-US tech collaboration.
The visit follows the recent signing of the UK-US trade agreement, which lays the groundwork for a future technology partnership between the two countries. As both nations turn to innovation as a key driver of economic growth and global problem-solving, Cambridge stands ready to play a pivotal role.
Recent Dealroom research for Founders at the University of Cambridge highlights Cambridge’s momentum: the area now attracts more venture capital investment in deep tech per capita than anywhere else globally. The region’s tech ecosystem is valued at $222 billion—18% of the UK’s total tech value, second only to London.
Prof Deborah Prentice said: "It was a pleasure to join the Ambassador and colleagues to showcase the full depth and breadth of Cambridge’s research and business strengths - from personalised vaccines and genomics to qubits and semiconductors. Cambridge has unique capabilities to help drive the UK-US tech partnership forward, and we’re excited to build on this momentum."
This week, UK Ambassador to the United States of America Lord Mandelson visited the University of Cambridge to explore its world-leading strengths in innovation and its deepening academic and industrial partnerships with the USA.
Patient with debilitating inherited condition receives new approved treatment on the NHS in Europe first

Mary Catchpole, 19, was given a newly licensed drug called leniolisib (or Joenja) at Addenbrooke’s Hospital in Cambridge. It is the first ever targeted treatment for a rare, inherited immunodeficiency called Activated PI3-Kinase delta syndrome (APDS).
People with APDS have a weakened immune system, making them vulnerable to repeated infections and autoimmune or inflammatory conditions. Discovered just over a decade ago by a team of Cambridge researchers, it is a debilitating and life-threatening condition, with patients more likely to develop blood cancers like lymphoma.
APDS is a relatively recently discovered immunodeficiency, and Mary’s family played a key role in its identification in 2013. Mary’s mother and uncle, who were Addenbrooke’s patients, were offered DNA sequencing (whole exome sequencing) to see if there was a genetic cause for their immunodeficiency.
Cambridge researchers identified a change in their genes that increased activity of an enzyme called PI3-Kinase delta, resulting in the illness being named Activated PI3-Kinase delta syndrome (APDS).
The team, which involved researchers from the University of Cambridge, Babraham Institute, MRC Laboratory for Molecular Biology, and clinicians from Addenbrooke’s, was primarily funded by Wellcome and the National Institute for Health and Care Research (NIHR).
With APDS, the enzyme PI3-Kinase delta is ‘switched on’ all the time, preventing immune cells from fighting infection and leading to abnormal or dysregulated immune function.
The new treatment – with one tablet taken twice a day – aims to inhibit the enzyme, effectively normalising the immune system.
Dr Anita Chandra, consultant immunologist at Addenbrooke’s and Affiliated Assistant Professor at the University of Cambridge, said: “It is incredible to go from the discovery of a new disease in Cambridge to a treatment being approved and offered on the NHS, within the space of 12 years.
“This new drug will make a huge difference to people living with APDS, hopefully allowing patients to avoid antibiotics, immunoglobulin replacement and potentially even a stem cell transplant in the future.”
Professor Sergey Nejentsev from the University of Cambridge who led the research that discovered APDS said: “As soon as we understood the cause of APDS, we immediately realised that certain drugs could be used to inhibit the enzyme that is activated in these patients. Leniolisib does precisely that. I am delighted that we finally have a treatment which will change the lives of APDS patients.”
The disorder has significantly impacted Mary’s family on her mother’s side. Her aunt died aged 12, while her mother, uncle and grandmother all died in their 30s and 40s.
Mary works as a teaching assistant and lives in Great Yarmouth, Norfolk with her father Jimmy and older brother Joe, who does not have the condition.
Prior to leniolisib, the only treatments available to APDS patients were antibiotics for infections, immunoglobulin replacement therapy (to prevent infections and damage to organs), or a bone marrow or stem cell transplant, which can be a potential cure but carries significant risks.
Mary said: “Having APDS means I’ve got a higher chance of infections and getting unwell, which is hard when all I want to do is work and dance and have adventures. All my life I’ve had to have weekly infusions which make me feel like a pin cushion, and I’ve had to take lots of medication which has been tough.
“Now that I have this new treatment, it does feel bitter-sweet as my late mum and other affected members of my family never got the chance to have this new lease of life, but it is a gift. I feel blessed.”
Leniolisib was licensed for use in the USA in 2023, following clinical trials. After assessment and approval by the UK medicines regulator, the MHRA, it has now been approved by NICE (the National Institute for Health and Care Excellence) for NHS use, making the NHS the first health system in Europe to use it to treat patients with APDS.
Professor James Palmer, NHS England’s Medical Director for Specialised Commissioning, said: “We’re delighted to see Mary become the first patient in Europe to receive this first-ever targeted and approved therapy for a rare condition identified just over a decade ago – in Cambridge no less.
“This treatment could be life-changing for those affected by this debilitating genetic disorder, and this important step forward is another example of the NHS’s commitment to offering access to innovative medicines for those living with rare conditions.”
As a tertiary centre for immunodeficiencies, Addenbrooke’s, part of Cambridge University Hospitals NHS Foundation Trust, can take referrals of patients eligible for leniolisib for specialist review, care and ongoing research into this rare condition.
Dr Susan Walsh, Chief Executive Officer at Immunodeficiency UK, said: “With leniolisib, we now have a targeted treatment available that addresses the fundamental cause of the immune system problems experienced in APDS. This demonstrates the power of research and is a huge leap forward. The new treatment will help improve the quality of life for those families living with APDS.”
By studying the role of the enzyme linked to APDS and the new drug’s impact on patients’ immune systems, researchers hope that leniolisib could in future be applied to other, more common immune-related conditions.
Adapted from a press release from Cambridge University Hospitals.
A teenager who has lost family members including her mother because of a rare genetic hereditary illness has become the first patient in the UK and Europe to have a new treatment developed by Cambridge researchers and approved for use on the NHS.
Co-founder of billion-dollar AI for autonomous driving company and Cambridge alumnus wins Princess Royal Silver Medal

Wayve is one of the UK’s most valuable deep-tech startups, backed by more than $1 billion (about £730 million) in funding. Alex Kendall co-founded the company in 2017 following his PhD at the University of Cambridge, where he pioneered a contrarian approach to self-driving cars.
At a time when the industry relied heavily on rule-based systems, maps and multiple sensors, he proposed a different vision powered by deep learning – where a single neural network could learn to drive from raw data without human intervention.
Wayve’s approach creates a general-purpose driving intelligence that can adapt to new environments. Its models are trained on tens of petabytes of real-world data gathered by its team of safety drivers. Wayve tests its models both in real-world driving and in simulation: real-world testing exposes the AI to diverse conditions, while simulation enables efficient, large-scale validation.
Synthetic data covering rare or unseen scenarios is used to train the technology to navigate the real world safely. Wayve tests these safety-critical scenarios, such as near collisions or unpredictable pedestrian behaviour, using a cutting-edge generative world model.
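Wayve has not published its production models here, but the core end-to-end idea (a single network mapping raw camera pixels straight to driving controls) can be sketched in a few lines of Python. Everything below, from the layer sizes to the two-value control output, is an illustrative assumption rather than Wayve’s actual architecture:

```python
# Minimal, illustrative end-to-end driving policy: raw pixels in,
# steering and throttle out. A toy sketch of the general idea only,
# not Wayve's real (and far larger) models.
import torch
import torch.nn as nn

class TinyDrivingPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        # Small convolutional encoder: extracts visual features.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
        )
        # Small head: maps features to two continuous controls.
        self.head = nn.Sequential(
            nn.Linear(32 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, 2),  # [steering, throttle]
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(image))

# Such a policy could be trained by behavioural cloning: regressing
# its predicted controls against recorded human driving.
policy = TinyDrivingPolicy()
frame = torch.rand(1, 3, 96, 96)   # one RGB camera frame
controls = policy(frame)           # tensor of shape (1, 2)
print(controls.shape)
```

A production system adds far more capacity, temporal context and safety machinery; the sketch only shows why a single network from pixels to controls is so simple to state as an architecture.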
Wayve’s autonomous cars have been navigating the complex streets of London since 2019, overseen by legally required safety drivers. Last year the company expanded to San Francisco, and it has also been testing the cars in Stuttgart and Japan. Wayve plans to license its technology to car manufacturers, with Nissan set to integrate Wayve’s AI into its vehicles by 2027 to support driver assistance.
The engineering team have also built the first language-driving model tested on public roads. LINGO opens up communication with the robot and can narrate its driving and answer questions. That means Wayve’s engineers (and eventually passengers) can communicate with the AI and ask it to explain decisions or drive in a certain way.
Kendall sees autonomous driving as a launchpad for a broader revolution in embodied AI, with applications in robotics, manufacturing, and healthcare. “Bringing AI into the physical world in a way that it can interact with us, is real – is tangible,” he explains. “I think it’s going to be the biggest transformation we go through in our lifetimes.”
Adapted from a Royal Academy of Engineering press release.
Alex Kendall, CEO and Co-Founder of Wayve, a billion-dollar UK company that uses deep learning to solve the challenges of self-driving cars, has been presented with the Princess Royal Silver Medal, one of the Royal Academy of Engineering’s most prestigious individual awards.
Celebrating Lord Sainsbury of Turville’s ‘selfless’ service as Chancellor

At a reception at the Vice-Chancellor’s Lodge this week, which celebrated his service to the University, Lord Sainsbury talked fondly about his own time as a student at Cambridge, and said: “It has been a great honour and pleasure to be Chancellor of the University of Cambridge, one of the world’s greatest universities.
“Over the years, I have watched with awe how the University has produced an endless stream of brilliant research and an enlightened education for its undergraduates and postgraduates, and I hope that by being Chancellor, and in a number of other ways, I have to some extent repaid my debt to the University. I will always look back at my time as Chancellor with the greatest pleasure.”
The Vice-Chancellor, Professor Deborah Prentice, paid a warm tribute to the Chancellor and thanked him for his service and contribution to the life of the University, and his support for her.
In a recent edition of CAM – the University's alumni magazine – other friends and former colleagues recounted the unique qualities Lord Sainsbury has brought to the post during almost a decade and a half of unwavering commitment.
With high-level experience in government and industry alike, Lord Sainsbury has been a highly effective advocate for the best interests of the University on both the national and global stage. “He’s a man of great ability and thoughtfulness,” says Professor Mike Proctor, Provost of King’s College – Lord Sainsbury’s alma mater – from 2013 to 2023. “He’s very well connected in both the public and private sectors. And that’s been very helpful to the University at large.”
Professor Stephen Toope, the 346th Vice-Chancellor, says that although the role is technically ceremonial, Lord Sainsbury was always willing to go above and beyond. “If I asked him to do something for the University – connect me with the right person, give me a piece of advice – he always did it. He was very generous in making introductions, and saw his role as trying to strengthen the University where he could. And that was largely by supporting the people who’d been asked to do the big jobs – on the Council and in the leadership of Cambridge.”
As a former Minister of Science and Innovation, Lord Sainsbury has brought a wealth of experience to the University. But he has also brought his own love of research and innovation to bear, as Rebecca Simmons, the VC’s former Chief of Staff and now COO of quantum computing company Riverlane, saw first-hand. “He liked to get into the detail beforehand, so he could make good connections with people,” she remembers. “And sometimes, he would come back to see the same people over several years. For example, he stayed in touch with the CEO of Endomag, a cancer diagnostics spinout, and made a point of going back to meet them at key moments. In fact, accompanying him on visits was one of the most fun parts of my job.”
Dr Regina Sachers, former Head of the Vice-Chancellor’s Office and now Director of Governance and Compliance, agrees. “He found it easy to connect with academics because he was genuinely interested in the work. He would always ask very informed questions, and would frequently offer his card and put people in touch with his own connections. It felt like a very genuine and low-key approach.”
The role of Vice-Chancellor can be lonely, says Sir Leszek Borysiewicz, Vice-Chancellor 2010-17: often, the only person you can talk to is the Chancellor. “And Lord Sainsbury always made himself available. He was a friend, a mentor, an adviser. We had differences of opinion, but we could always talk. Having that open debate meant you could road-test the strength of an argument – and, sometimes, backpedal, because he’d made some very valid points that were critical for the University. And I can attest that during my time as Vice-Chancellor, he was always there for the difficult issues. He was quiet and understated, but very thoughtful and very wise – and never interfered with the executive functions that the Vice-Chancellor has to exercise.”
“Lord Sainsbury does not have an agenda of his own: he seeks to do what the University needs, and always has its best interests at heart,” says current Vice-Chancellor Professor Deborah Prentice. “He approaches the job with selflessness and the mentality of a public servant. I like the fact that sometimes he just turns up to things; he’s such a curious and interested person. I think he very much embodies the values of the University.”
Professor Toope says that he has always been struck by Lord Sainsbury’s “complete lack of pomposity. Some people think they are the role. He always understood that the role is the role: he just happened to be occupying it for a period. And he brought a personal and political integrity to it.”
The election for Lord Sainsbury’s successor as Chancellor takes place next month.
After 14 years as Chancellor of the University, Lord Sainsbury of Turville has formally stood down from the role.
AI art protection tools still leave creators at risk, researchers say

So say a team of researchers who have uncovered significant weaknesses in two of the art protection tools most used by artists to safeguard their work.
According to their creators, Glaze and NightShade were both developed to protect human creatives against the invasive uses of generative artificial intelligence.
The tools are popular with digital artists who want to stop artificial intelligence models (like the AI art generator Stable Diffusion) from copying their unique styles without consent. Together, Glaze and NightShade have been downloaded almost nine million times.
But according to an international group of researchers, these tools have critical weaknesses that mean they cannot reliably stop AI models from training on artists’ work.
The tools add subtle, invisible distortions (known as poisoning perturbations) to digital images. These ‘poisons’ are designed to confuse AI models during training. Glaze takes a passive approach, hindering the AI model’s ability to extract key stylistic features. NightShade goes further, actively corrupting the learning process by causing the AI model to associate an artist’s style with unrelated concepts.
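Neither tool’s exact algorithm is reproduced here, but the basic notion of a bounded, near-invisible perturbation can be illustrated with a toy sketch. Plain random noise stands in for the carefully optimised perturbations the real tools compute:

```python
# Toy illustration of a bounded "poisoning" perturbation: each pixel
# changes by at most +/- epsilon, too little for the eye to notice.
# NOTE: Glaze and NightShade optimise their perturbations against a
# feature extractor; random noise here only illustrates the idea of
# a small, invisible change, not their actual algorithms.
import numpy as np

def add_toy_perturbation(image: np.ndarray, epsilon: float = 4.0) -> np.ndarray:
    """image: HxWx3 uint8 array; returns a perturbed copy."""
    delta = np.random.uniform(-epsilon, epsilon, size=image.shape)
    perturbed = np.clip(image.astype(float) + delta, 0, 255)
    return perturbed.astype(np.uint8)

artwork = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
protected = add_toy_perturbation(artwork)
# Each pixel has moved by at most epsilon.
print(np.abs(protected.astype(int) - artwork.astype(int)).max())  # <= 4
```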
But the researchers have created a method – called LightShed – that can bypass these protections. LightShed can detect, reverse-engineer and remove these distortions, effectively stripping away the poisons and rendering the images usable again for generative AI model training.
It was developed by researchers at the University of Cambridge along with colleagues at the Technical University Darmstadt and the University of Texas at San Antonio. The researchers hope that by publicising their work – which will be presented at the USENIX Security Symposium, a major security conference, in August – they can let creatives know that there are major issues with art protection tools.
LightShed works through a three-step process. It first identifies whether an image has been altered with known poisoning techniques.
In a second, reverse-engineering step, it learns the characteristics of the perturbations using publicly available poisoned examples. Finally, it eliminates the poison to restore the image to its original, unprotected form.
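As a rough illustration of that three-step structure, the sketch below uses simple template averaging as a stand-in for the learned model the researchers describe; the function names and the additive-perturbation assumption are illustrative only:

```python
# Illustrative detect -> reverse-engineer -> remove pipeline in the
# spirit of LightShed. The real system learns perturbation features
# with a neural model; template averaging here is only a stand-in.
import numpy as np

def learn_template(poisoned_examples, clean_examples):
    """Step 2 stand-in: average perturbation over paired examples."""
    diffs = [p.astype(float) - c.astype(float)
             for p, c in zip(poisoned_examples, clean_examples)]
    return np.mean(diffs, axis=0)

def looks_poisoned(image, template, threshold):
    """Step 1 stand-in: project the image onto the unit-norm template.
    A poisoned image shifts this projection by the template's energy;
    the threshold would be calibrated on known clean/poisoned images."""
    t = template / (np.linalg.norm(template) + 1e-9)
    return float(np.sum(image.astype(float) * t)) > threshold

def strip_poison(image, template):
    """Step 3 stand-in: subtract the estimated perturbation."""
    return np.clip(image.astype(float) - template, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, (8, 8), dtype=np.uint8)
poisoned = np.clip(clean + rng.uniform(-4, 4, (8, 8)), 0, 255).astype(np.uint8)
template = learn_template([poisoned], [clean])
restored = strip_poison(poisoned, template)  # equals `clean` for this pair
```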
In experimental evaluations, LightShed detected NightShade-protected images with 99.98% accuracy and effectively removed the embedded protections from those images.
“This shows that even when using tools like NightShade, artists are still at risk of their work being used for training AI models without their consent,” said first author Hanna Foerster from Cambridge’s Department of Computer Science and Technology, who conducted the work during an internship at TU Darmstadt.
Although LightShed reveals serious vulnerabilities in art protection tools, the researchers stress that it was developed not as an attack on them, but as an urgent call to action to produce better, more adaptive ones.
“We see this as a chance to co-evolve defenses,” said co-author Professor Ahmad-Reza Sadeghi from the Technical University of Darmstadt. “Our goal is to collaborate with other scientists in this field and support the artistic community in developing tools that can withstand advanced adversaries.”
The landscape of AI and digital creativity is rapidly evolving. In March this year, OpenAI rolled out a ChatGPT image model that could instantly produce artwork in the style of Studio Ghibli, the Japanese animation studio.
This sparked a wide range of viral memes – and equally wide discussions about image copyright, in which legal analysts noted that Studio Ghibli would be limited in how it could respond to this since copyright law protects specific expression, not a specific artistic ‘style’.
Following these discussions, OpenAI announced prompt safeguards to block some user requests to generate images in the styles of living artists.
But issues over generative AI and copyright are ongoing, as highlighted by the copyright and trademark infringement case currently being heard in London’s High Court.
Global photography agency Getty Images is alleging that London-based AI company Stability AI trained its image generation model on the agency’s huge archive of copyrighted pictures. Stability AI is fighting Getty’s claim and arguing that the case represents an “overt threat” to the generative AI industry.
And earlier this month, Disney and Universal announced they are suing AI firm Midjourney over its image generator, which the two companies said is a “bottomless pit of plagiarism.”
“What we hope to do with our work is to highlight the urgent need for a roadmap towards more resilient, artist-centred protection strategies,” said Foerster. “We must let creatives know that they are still at risk and collaborate with others to develop better art protection tools in future.”
Hanna Foerster is a member of Downing College, Cambridge.
Reference:
Hanna Foerster et al. ‘LightShed: Defeating Perturbation-based Image Copyright Protections.’ Paper presented at the 34th USENIX Security Symposium. https://www.usenix.org/conference/usenixsecurity25/presentation/foerster
Artists urgently need stronger defences to protect their work from being used to train AI models without their consent.
Autonomous bus trial will carry passengers between Eddington and Cambridge West

A 15-seater autonomous bus will operate from Madingley Road Park & Ride around the University's Eddington neighbourhood and the Cambridge West Innovation District.
The early phase of the trial, following extensive virtual and on-road testing, starts on Tuesday 24 June with a limited number of morning and afternoon runs each Monday-Friday.
The trial passenger service is free and will enhance local connections, improving access to places of work and study, as well as community and sports facilities for those living and working in the area.
Dan Clarke, Head of Innovation and Technology at the Greater Cambridge Partnership, said: "This is an exciting milestone, but it’s just the beginning. People may have already seen the bus going around Eddington and Cambridge West from Madingley Park & Ride recently, as, after the extensive on-track training with the drivers, we’ve been running the bus on the road without passengers to learn more about how other road-users interact with the technology. We’re now moving gradually to the next stage of this trial by inviting passengers to use Connector.
"As with all new things, our aim is to introduce this new technology in a phased way that balances the trialling of these new systems with safety and the passenger experience. This will ensure we can learn more about this technology and showcase the potential for self-driving vehicles to support sustainable, reliable public transport across Cambridge."
The vehicle is operated by Whippet Coaches using autonomous technology from Fusion Processing.
Professor Anna Philpott, Pro-Vice-Chancellor for Resources and Operations at the University of Cambridge, said: "Innovation and research that contributes to society is at the heart of the University’s mission, and this trial aligns with our vision for sustainable and pioneering transport solutions for everyone travelling to and from our sites. Cambridge West Innovation District and Eddington are fitting locations for such an ambitious and forward-thinking project."
A full-scale launch of two full-size autonomous buses on a second route to the Cambridge Biomedical Campus will begin later this year.
The Connector trial is part of a national Centre for Connected and Autonomous Vehicles (CCAV) programme backed by the UK Government to explore how autonomous buses can be safely and effectively integrated into public transport systems.
All vehicles are supported by trained safety drivers at all times and have already undergone digital simulation and rigorous on-road testing.
Find out more about Connector and check the timetable to see when you can take a ride on the bus.
The Greater Cambridge Partnership’s Connector project is bringing self-driving passenger transport to the city.
Rubin Observatory reveals first images

The Rubin Observatory, jointly funded by the US National Science Foundation and the US Department of Energy’s Office of Science, has released its first imagery, showing cosmic phenomena at an unprecedented scale.
In just over 10 hours of test observations, the NSF-DOE Rubin Observatory has already captured millions of galaxies and Milky Way stars and thousands of asteroids. The imagery is a small preview of the Rubin Observatory’s upcoming 10-year scientific mission to explore and understand some of the universe's biggest mysteries.
Located on a mountaintop in Chile, the Rubin Observatory will repeatedly scan the sky for 10 years and create an ultra-wide, ultra-high-definition time-lapse record of our universe. The region in central Chile is favoured for astronomical observations because of its dry air and dark skies, which also offer an ideal view of the Milky Way’s centre.
The facility is set to achieve ‘first light,’ or make the first scientific observations of the Southern Hemisphere’s sky using its 8.4-meter Simonyi Survey Telescope, on 4 July.
UK astronomers, including from the University of Cambridge, are celebrating their role in the most ambitious sky survey to date.
“We will be looking at the universe in a way that we have never done before, and this exploration is bound to throw up surprises that we never imagined,” said Professor Hiranya Peiris from Cambridge’s Institute of Astronomy, and a builder of the Legacy Survey of Space and Time (LSST) Dark Energy Science Collaboration.
Enabled by a £23 million investment from the Science and Technology Facilities Council (STFC), UK astronomers and software developers have been preparing the hardware and software needed to analyse the petabytes of data the survey will produce, enabling groundbreaking science that will enhance our understanding of the universe.
The UK is the second largest international contributor to the multinational project, putting UK astronomers at the forefront when it comes to exploiting this unique window on the Universe.
The UK is also playing a significant role in the management and processing of the unprecedented amounts of data. The UK will host one of three international data facilities and process around 1.5 million images, capturing around 10 billion stars and galaxies. When complete, the full 10-year survey is expected to rack up 500 petabytes of data – the same storage as half a million 4K Hollywood movies.
The UK’s science portal for the international community is capable of connecting around 1,500 astronomers with UK Digital Research Infrastructure to support the exploitation of this uniquely rich and detailed view of the Universe.
More than two decades in the making, Rubin is the first of its kind: its mirror design, camera size and sensitivity, telescope speed, and computing infrastructure are each in an entirely new category. Over the next 10 years, Rubin will perform the Legacy Survey of Space and Time (LSST) using the LSST Camera and the Simonyi Survey Telescope.
By repeatedly scanning the sky for 10 years, the observatory will deliver a treasure trove of discoveries: asteroids and comets, pulsating stars, and supernova explosions. Science operations are expected to start towards the end of 2025.
"I can’t wait to explore the first LSST catalogues - revealing the faintest dwarf galaxies and stellar streams swarming through the Milky Way’s halo," said Professor Vasily Belokurov from Cambridge's Institute of Astronomy, member of LSST:UK. "A new era of galactic archaeology is beginning!”
“UK researchers have been contributing to the scientific and technical preparation for the Rubin LSST for more than ten years,” said Professor Bob Mann from the University of Edinburgh, LSST:UK Project Leader. “These exciting first look images show that everything is working well and reassure us that we have a decade’s worth of wonderful data coming our way, with which UK astronomers will do great science.”
Hiranya Peiris is a Fellow of Murray Edwards College, Cambridge.
The Vera C. Rubin Observatory, a new scientific facility that will bring the night sky to life like never before using the largest camera ever built, has revealed its ‘first look’ images at the start of its 10-year survey of the cosmos.
Is the secret to immortality in our DNA?

Photo by Maryam Hiradfar
Samantha Laine Perfas
Harvard Staff Writer
Alum’s campus novel offers cautionary tale for biotech culture
It’s your typical biotech love story: A couple of eager Harvard students stumble upon a brilliant scientific breakthrough in anti-aging, drop out of school to pursue their dream, experience a fast and furious rise to fame before … well, we won’t give the ending away. In “Notes on Infinity,” Austin Taylor ’21 showcases her grasp of science and love of literature. During her own time at the College, she double concentrated in English and chemistry, a decision that has served her well in writing her debut novel. The Gazette spoke with her about how her time at Harvard influenced her writing, as well as what’s next in her career. This interview has been edited for length and clarity.
In the book, Harvard students Zoe and Jack discover a new way to unlock the potential of anti-aging. You draw from many studies and chemistry principles. Does the science track? Does the secret to immortality lie in our DNA?
My first disclaimer is that while I studied chemistry, it was physical chemistry and not bio. However, it was important that I get the scientific context right. I wanted the novel to be completely plausible, so everything leading up to the actual work of the two main characters is real and I tried to communicate it accurately. So is it possible that a discovery like the one they made could be made? I think theoretically, yes. Has a similar discovery been made? No.
David Sinclair is doing some work on this. And the main discovery that they work from is the Yamanaka factors, which allow us to turn back the clock on aged cells. So if we figured out how to turn back the clock on aged cells for many cells in our body at one time, one would think that would have an anti-aging effect. But that’s the part where the science becomes fiction.
You’ve said that one theme you wanted to explore with this novel was empathy. Why is that?
One of the things that inspired the book was the many startup scandals in the news. As I followed those stories, I was struck by the simplicity of the punchy, dramatic, six-word headlines. It’s very easy to forget that those headlines are describing real people who had whole lives leading up to a set of decisions, a set of moments that resulted in these headlines being published. I don’t think that’s a good thing; I think it’s really important not to flatten people. It’s important to humanize others, and I think that the most important task of fiction — both for writers and for readers — is to produce empathy.
“When you dump extreme amounts of money on people who have good ideas, and you encourage them to ‘move fast and break things,’ the incentive structure you create is not always one that will produce good, sound science.”
How did your own experiences inform the characters in this novel?
One of the main characters, Zoe, is a young woman in STEM. I studied chemistry, so I also was a young woman in STEM. Some of the tension she experiences is the feeling of tokenization or of being perceived primarily by or through the lens of her gender. Zoe struggles throughout the book with very much just wanting to be a scientist, but feeling like she’s constantly the woman scientist, especially because she starts a biotech company. She is lauded for being a woman founder, which in some ways is great, but in other ways is very isolating and frustrating.
I certainly did not drop out and form a billion-dollar startup, but I did find myself wondering if I was being given opportunities in the sciences because I was working hard and doing good, interesting work — or if I was being given opportunities because I was a woman and a minority in the field. That is a tough feeling that a lot of minorities in various spaces experience, and it can create a lot of tension and insecurity.
You also tackled what seems to be a recurring theme in biotech start-up culture, with the often quick rise to success followed by failure. What were you attempting to explore there?
Two major scandals that happened just before and during the writing process were the rise and fall of Theranos founder Elizabeth Holmes and the collapse of FTX, the cryptocurrency exchange founded by Sam Bankman-Fried. When you put people in a high-pressure, high-stakes environment, they can behave differently than they maybe would otherwise. For the two main characters, those two environments are first Harvard College — which is very high-pressure and very high-stakes — and then the venture-capital-funded world of biotech. When you dump extreme amounts of money on people who have good ideas, and you encourage them to “move fast and break things,” the incentive structure you create is not always one that will produce good, sound science.
Chemistry and English make a unique pairing. How did concentrating in both affect your professional pursuits?
As an undergraduate, I was really excited about science. I also really loved my English classes. I remember my sophomore year when I was thinking about declaring, I nervously soft-pitched the idea to my chemistry adviser of a joint concentration, thinking he was going to say it was a silly idea. Instead, he was very excited about it. I remember walking out of that meeting feeling thrilled that it was a possibility. I had a fantastic time pursuing the two, and the combination is sort of perfect for the novel, right? I leveraged my scientific literacy during research. I drew a lot on my experiences as a chemistry student and in the lab, as well as the skills in writing and reading that I developed as an English concentrator.
Were there any faculty who were particularly helpful to you?
My advisers in the Chemistry and English departments, Greg Tucci and Daniel Donaghue, were incredibly supportive of my joint pursuit. I also took two classes in the English department — contemporary fiction and a creative writing course with Jill Abramson — that were formative for me as a writer. Jill gave me some very generous feedback and was very supportive; it was the first time that I really considered that writing could be a career for me. My Principal Investigator, Cynthia Friend, was also a great mentor and is a fabulous scientist.
And then, broadly speaking, the faculty, staff, and peers that I was surrounded by at Harvard were just brilliant and doing incredible things. It was intimidating and challenging, especially for the first few years, but being in that environment made me an immeasurably better thinker, writer, problem-solver, friend, and person. I’m deeply grateful to everyone who made up the community during the time that I was there.
What’s next for you?
I’ll be attending law school at Stanford University in the fall. I’m interested in the interface between emerging science and tech and the law. While I was writing my first novel, ChatGPT emerged, and so my legal and professional interests in the publishing space sort of dovetailed. I’m hoping to work on AI governance, particularly as it relates to art and media.
I don’t have any plans to stop writing, so I’m hoping to pursue some sort of career as an attorney or legal scholar in parallel with a career as a novelist. I’m working on my second novel currently and hope to keep writing for as long as people are interested in reading what I write.
‘Have a healthy respect that nature sometimes bites back’

Getty Images
It’s a bad year for ticks. Here are some precautions, and steps to take if you get bitten.
Samantha Laine Perfas
Harvard Staff Writer
Public health officials are saying this year is a particularly bad one for ticks, due to milder winters and rainy springs in many parts of the country.
In a virtual event hosted by Harvard’s T.H. Chan School of Public Health, experts from medicine, epidemiology, and environmental health came together to remind the public that these critters, which carry a range of serious diseases, can put a huge damper on summer plans if not taken seriously.
“Go outside. Enjoy nature, it’s healthy for you; but just have a healthy respect that nature sometimes bites back,” said Richard Pollack, senior environmental public health officer.
Gaurab Basu, an assistant professor in the Department of Environmental Health, said it’s important to approach tick season thoughtfully, with rates of Lyme disease and other tick-borne illnesses on the rise.
“[We need] vigilance, not panic,” he said during the event, which was livestreamed July 1.
Basu said climate change has changed tick behavior and presence — essentially, they emerge earlier and remain longer. The arachnids are most active in warmer months, but there have been sightings even in January in some areas.
“We can’t say any one case of Lyme is because of climate change,” Basu said. “But we have to understand what we’re doing to our environment, what we’re doing by burning fossil fuels and warming our planet. And the trend lines that we are creating because of it.”
Previously known hot spots are still hot, particularly in New England and the Midwest, but the areas where ticks have thrived are growing.
Cassandra Pierre, assistant professor at Boston University’s Chobanian and Avedisian School of Medicine, said cases of tick-borne Rocky Mountain spotted fever are on the rise in southern Massachusetts, including places like Cape Cod and Martha’s Vineyard. While rare, it is a life-threatening disease if not recognized and treated early.
Reported cases of tick-borne diseases, 2019-2022

“Lyme certainly does still dwarf everything else we see,” Pierre said. That being said, she continued, instances of some of the rarer tick-borne illnesses — like anaplasmosis and babesiosis — have increased.
A rise in co-infection rates, which track patients infected with multiple types of tick-borne diseases, complicates the picture even further. These cases can lead to more severe symptoms, delays in treatment, and prolonged illness.
Pollack shared basic best practices for keeping safe in high-risk outdoor areas.
When hiking, stay on the path; wear light-colored pants and socks so you can more easily spot a tick (Pollack encouraged listeners to flick the tick off and “send the tick for a ride”); pretreat clothing with an EPA-registered form of permethrin, a synthetic form of what can be extracted from chrysanthemums, which is known to be safe for humans; and use EPA-registered insect repellent on exposed skin.
Also check yourself occasionally while outdoors, and again when you come inside. If you find a tick, remove it immediately with fine-tipped forceps, your fingernail, or a credit card.
“Speed is far more important than is the actual means of removing,” Pollack said. “The longer the tick is attached, the more likely it is that it will be able to transmit one of those nasty pathogens to you.”
“When ticks attach, they need to attach for a period of 24-36 hours to have the opportunity to expose their host — in this case humans — to [pathogens],” said Pierre, who is also the medical director of public health programs and an associate hospital epidemiologist at Boston Medical Center. This is why checking frequently is crucial.
The antibiotic doxycycline is available by prescription and can reduce the risk of infection by as much as 87 percent if taken within 72 hours of a bite.
At the end of the day, ticks are part of our natural environment, Pollack said.
“Nature is a core factor in human health and public health. You’ve got to respect nature, and I think we often don’t,” Basu said. “We need to integrate this understanding of how we build our communities, what kind of energy we use, where our roads are, where we’re building into. [These] all have profound implications for public health.”
The event was moderated by Dave Epstein, a meteorologist for WGBH and a correspondent for The Boston Globe. To watch the full event, visit the Chan School’s YouTube page. For more information on Lyme disease, visit the Lyme Wellness Initiative.
At MIT, musicians make new tools for new tunes
The MIT Music Technology Program carves out a space to explore new sounds, tunes, and experiences. From the classroom to the community, students in music tech grapple with developing both creative technology and their creative selves.
In the course 21M.080 (Intro to Music Technology), it dawned on Thelonious Cooper ’25 that he had the skills to create his own instruments. “I can literally make a new instrument. I don’t think most people consider that as an option. But it totally is,” Cooper says.
Similar to how the development of photography contributed to a radical shift in the priorities of painting, Cooper identifies the potential of new music tools to “[pave] the way to find new forms of creative expression.” Cooper develops digital instruments and music software.
For Matthew Caren ’25, his parallel interests in computer science, mathematics, and jazz performance found an intersection in design. Caren explains, “the process of creating music doesn’t actually start when you, for instance, sit at a piano. It really starts when someone goes out and designs that piano and lays out the parameters for how the creation process is going to go.” When it is the tool that defines the parameters for creating art, Caren reasons, “You can tell your story only as well as the technology allows you to.”
What purposes can music technology serve? In holding both technical and artistic questions simultaneously, makers of music technology uncover new ways to approach engineering problems alongside human notions of community and beauty.
Building the bridge between music and tech
Taught by professor of the practice Eran Egozy, class 21M.385 (Interactive Music Systems, or IMS) focuses on the creation of musical experiences that include some element of human-computer interaction (HCI) through software or a hardware interface.
In their first assignment, students program a digital synthesizer, a piece of software to generate and manipulate pitches with desired qualities. While building this foundation of the application of hard technical skills to music, students contemplate their budding aesthetic and creative interests.
“How can you use it creatively? How can you make it make music in a way that’s not just a bunch of random sounds, but actually has some intention? Can you use the thing you just made to perform a little song?” prompts Egozy.
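The class’s actual starter code is not reproduced here, but the heart of such a first assignment, generating a pitch and shaping it with an amplitude envelope, fits in a short self-contained sketch (NumPy plus Python’s standard wave module; all parameter choices are illustrative):

```python
# Minimal digital synthesizer sketch: generate pitches with a simple
# attack/decay envelope and write them to a WAV file. Illustrative of
# the kind of first assignment described, not actual course code.
import wave
import numpy as np

SAMPLE_RATE = 44100  # samples per second

def synth_note(freq_hz: float, duration_s: float) -> np.ndarray:
    t = np.linspace(0, duration_s, int(SAMPLE_RATE * duration_s),
                    endpoint=False)
    tone = np.sin(2 * np.pi * freq_hz * t)                 # pure sine pitch
    envelope = np.minimum(t / 0.02, 1.0) * np.exp(-3 * t)  # 20 ms attack, decay
    return tone * envelope

# A little "song": three notes of an A-major arpeggio.
song = np.concatenate([synth_note(f, 0.4) for f in (440.0, 554.37, 659.25)])
pcm = (song * 0.8 * 32767).astype(np.int16)  # scale to 16-bit PCM

with wave.open("arpeggio.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)  # 2 bytes = 16-bit samples
    w.setframerate(SAMPLE_RATE)
    w.writeframes(pcm.tobytes())
```

Swapping the sine for another waveform, or reshaping the envelope, is exactly the kind of intentional manipulation the prompt above invites.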
In the spirit of MIT’s motto, “mens et manus” (“mind and hand”), students of IMS propose, design, implement, play-test, and present a creative musical system of their own during the last stretch of the semester. Students develop novel music games, tools, and instruments alongside an understanding of the principles of user interface, user experience (UI/UX), and HCI.
Once students implement their ideas, they can evaluate their design. Egozy stresses it is important to develop a “working prototype” quickly. “As soon as it works, you can test it. As soon as you test it, you find out whether it's working or not, then you can adjust your design and your implementation,” he explains.
Although students receive feedback at multiple milestones, a day of play-testing is the “most focused and concentrated amount of learning [students] get in the entire class.” Students might find their design choices affirmed or their assumptions broken as peers test the limits of their creations. “It’s a very entertaining experience,” Egozy says.
Immersed in music tech since his graduate studies at the MIT Media Lab and as co-founder of Harmonix, the original developers of popular music game titles “Guitar Hero” and “Rock Band,” Egozy aims to empower more people to engage with music more deeply by creating “delightful music experiences.”
By the same token, developers of music technology deepen their understanding of music and technical skills. For Cooper, understanding the “causal factors” behind changes in sounds has helped him to “better curate and sculpt the sounds [he uses] when making music with much finer detail.”
Designing for possibility
Music technologies mark milestones in history. From the earliest acoustic instruments to the electrified realm of synthesizers and digital audio workstations, design decisions reverberate throughout the ages.
“When we create the tools that we use to make art, we design into them our understanding and our ideas about the things that we’re interested to explore,” says Ian Hattwick, lecturer in music technology.
Hattwick brings his experience as a professional musician and creative technologist as the instructor of Intro to Music Technology and class 21M.370 (Digital Instrument Design).
For Hattwick, identifying creative interests, expressing those interests by creating a tool, using the tool to create art, and then developing a new creative understanding is a generative and powerful feedback loop for an artist. But even if a tool is carefully designed for one purpose, creative users can put it to unexpected uses, generating new and cascading creative possibilities on a cultural scale.
In cases of many important music hardware technologies, “the impact of the decisions didn’t play out for a decade or two,” says Hattwick. Over time, he notes, people shift their understanding of what is possible with the available instruments, pushing their expectations of technology and what music can sound like. One novel example is the relationship between drummers and drum machines — human drummers took inspiration from programmatic drum beats to learn unique, challenging rhythms.
Although designers may feel an impulse for originality, Hattwick stresses that design happens “within a context of culture.” Designers extend, transform, and are influenced by existing ideas. On the flip side, if a design is too unfamiliar, the ideas expressed risk limited impact and propagation. The current understanding of what sounds are even considered musical is in tension with the ways new tools can manipulate and generate them.
This tension leads Hattwick to put tools and the thoughtful choices of their human designers back in focus. He says, “when you use tools that other people have designed, you’re also adopting the way that they think about things. There’s nothing wrong with that. But you can make a different choice.”
Grounding his interests in the physical hardware that has backed much of music history, electrical engineering and computer science undergraduate Evan Ingoldsby builds guitar pedals and audio circuits that manipulate signals through electronic components. “A lot of modern music tech is based off of taking hardware for other purposes, like signal filters and saturators and such, and putting music and sounds through them and seeing how [they] change,” says Ingoldsby.
For Cooper, learning from history and the existing body of knowledge, both artistically and technically, unlocks more creativity. “Adding more tools to your toolbox should never stop you from building something that you want to. It can only make it easier,” he says.
Ingoldsby finds the unexpected, emergent effects of pushing hardware tools such as modular synthesizers to their limits most inspiring. “It increases in complexity, but it also increases in freedom.”
Collaboration and community
Music has always been a collective endeavor, fostering connection, ritual, and communal experiences. Advancements in music technology can both expand creative possibilities for live performers and foster new ways for musicians to gather and create.
Cooper makes a direct link between his research in high-performance, low-latency computing and his work developing real-time music tools. Many music tools can only function well “offline,” Cooper poses. “For example, you’ll record something into your digital audio workstation on your computer, and then you’ll hit a button, and it will change the way it sounds. That’s super cool. But I think it’s even cooler if you can make that real-time. Can you change what the sound is coming out as you’re playing?” he asks.
The problem of speeding up the processing of sound, such that the time difference between input and output — latency — is imperceptible to human hearing, is a technical one. Cooper takes an interest in real-time timbre transfer that could, for example, change the sound coming from a saxophone as if it were coming from a cello. The problem intersects with common techniques in artificial intelligence research, he notes. Cooper’s work to improve the speed and efficiency of music software tools could provide new effects for digital music performers to manipulate audio in a live setting.
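The underlying arithmetic is simple: audio is processed in blocks, and each block adds block_size / sample_rate seconds of delay before the processed sound can be heard. A quick, illustrative check:

```python
# Each audio block adds block_size / sample_rate seconds of delay.
# The numbers below are illustrative, not from any particular tool.
SAMPLE_RATE = 48_000  # samples per second

for block_size in (64, 256, 1024):
    latency_ms = 1000 * block_size / SAMPLE_RATE
    print(f"{block_size:4d}-sample block -> {latency_ms:4.1f} ms")

# 64 -> 1.3 ms, 256 -> 5.3 ms, 1024 -> 21.3 ms. Delays much beyond
# roughly 10 ms start to be noticeable to performers, which is why
# real-time music software fights for small blocks and fast code.
```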
With the rise of personal computing in the 2010s, Hattwick recounts, an appeal for “laptop ensembles” emerged to contemplate new questions about live music performance in a digitizing era. “What does it mean to perform music with a laptop? Why is that fun? Is a laptop an instrument?” he poses.
In the Fabulous MIT Laptop Ensemble (FaMLE), directed by Hattwick, MIT students pursue music performance in a “living laboratory.” Driven by the interests of its members, FaMLE explores digital music, web audio, and live coding, an improvisational practice exposing the process of writing code to generate music. A member of FaMLE, Ingoldsby has found a place to situate his practice of sound design in a broader context.
When emerging digital technologies interface with art, challenging questions arise regarding human creativity. Communities made of multidisciplinary people allow for the exchange of ideas to generate novel approaches to complex problems. “Engineers have a lot to offer performers,” says Cooper. “As technology progresses, I think it’s important we use that to further develop our abilities for creative practice, instead of substituting it.”
Hattwick emphasizes, “The best way to explore this is together.”
© Photo: MIT Music and Theater Arts
Solomons’ treasure

From “The Solomon Collection: Dürer to Degas and Beyond,” a detail of Three Male Heads from “The Capitulation of Madrid,” Dec. 4, 1808. Antoine-Jean Gros (Paris 1771-1835 Meudon).
Photos by Stephanie Mitchell/Harvard Staff Photographer
Anna Lamb
Harvard Staff Writer
Cambridge couple’s art collection now shines in Harvard Art Museums
For decades, scores of paintings by 20th-century masters shared shelf space with family photos, books, and knickknacks in the Cambridge home of Arthur and Marny Solomon. Works by Claude Monet, Edgar Degas, and Paul Cézanne hung on their walls. And in a carriage house turned gallery in the backyard, more contemporary works by abstractionists such as Kenneth Noland, Jules Olitski, and Larry Poons shone.
Now, those works are on display for the public to enjoy in Harvard Art Museums’ exhibition “The Solomon Collection: Dürer to Degas and Beyond.”
“We are deeply grateful to Arthur and Marny Solomon for their careful stewardship of these artworks over many years, and for their generous impulse to share them with the Harvard Art Museums, a place in the community that was always near and dear to their hearts,” said Micha Winkler Thomas, deputy director of the Harvard Art Museums.

The Solomons were both lifelong art collectors with intricate ties to Harvard. Arthur was a professor of biophysics at Harvard Medical School, while Marny worked throughout her life as a teaching fellow for various Harvard professors after earning her A.B. in art history from Radcliffe in 1958. In 1985, after collecting both individually and as a couple for decades, the Solomons promised their collection to the Art Museums. It wasn’t until after Marny’s death in 2020 that the acquisition was made final. Arthur had passed away in 2005.
From the beginning
Arthur K. Solomon was born in 1912 in Pittsburgh into a tight-knit and wealthy Jewish family. His childhood, according to Marina Kliger, the Rousseau Curatorial Fellow in European Art and one of the curators of the Solomon exhibition, was filled with art and aesthetics. The Solomons’ crowd, including influential department store owner Edgar J. Kaufmann, were “cultural leaders in Pittsburgh.”
Kaufmann’s son, Edgar Kaufmann Jr., would later become curator of industrial design at the Museum of Modern Art, while another neighborhood boy, A. James Speyer, would become curator of 20th-century art at the Art Institute of Chicago.
Arthur, on the other hand, would go on to study chemistry at Princeton. But he still held onto his more artistic interests, taking New York-based photographer and modern art promoter Alfred Stieglitz as a mentor. Stieglitz introduced Arthur to the New York art scene and the popular realist paintings of American artists at the time.
In 1934, Arthur came to Harvard to pursue his Ph.D. in chemistry. While there, he made the first two purchases in his collection — watercolors by American artists Edward Hopper and Charles E. Burchfield that he had first seen with Stieglitz.


Harvard helped his collection grow when he audited courses in the fine arts department. One of those was the famous “Museum Work and Museum Problems” seminar that met at both the Harvard Art Museums and in Professor Paul Sachs’ home. According to Kliger, Sachs arranged student visits to the homes of distinguished collectors in New York and Philadelphia.
“I think that was probably the most important part of my becoming a collector — seeing these great collections,” Arthur was recorded saying in a series of interviews by the Oral History Committee of Harvard Medical School.
His art collection grew throughout the ’30s, when he went to Cambridge, England, for postdoctoral work and was introduced to German art dealer Justin Thannhauser. Through Thannhauser, Arthur collected works by Van Gogh, Degas, and Cézanne.
In the 1950s and ’60s he made most of his acquisitions through the New York and London-based dealer Julius Weitzner. Arthur then took a brief hiatus from collecting after the death of his first wife, Jean, in 1963.
That was until he met Marny — a collector in her own right.
The earliest documentation of Marny collecting was in July 1962, 10 years before her marriage to Arthur. According to Kliger, it’s documented that Marny brought two works to the Department of Conservation at Harvard’s Fogg Museum: a drawing of an unspecified subject by 17th-century Italian painter Pietro Francesco Mola and a print by 17th-century Italian printmaker Stefano della Bella.

Marny was close friends with Marjorie “Jerry” Cohn — curator emerita and former acting director of the Harvard Art Museums. They met in the early 1960s, when Cohn was a conservation assistant at the Fogg. Marny would send works directly from dealers to Cohn at the museum, where Cohn would mat and frame them. Cohn also served as a confidante on Marny’s subsequent acquisitions.
Marny mostly collected prints. When she met Arthur, however, the two began collecting a new form of art.
“When they met in the late 1960s both were already serious collectors. Arthur focused on 19th- and early 20th-century European art, while Marny was a dedicated print collector,” Kliger said. “After they married in December 1972, the Solomons experienced what they would come to describe as a ‘contemporary awakening.’”
One of their first joint purchases was in 1974, when they bought a 10-ton, 10-foot-long steel sculpture by Michael Steiner called “Betonica.”
“The Solomons installed their new acquisition in their spacious yard at 27 Craigie St., where the sculpture weathered years of New England winters and became part of the Solomons’ lives,” Kliger said.
Their other purchases were displayed in their 19th-century Italianate revival home. In the early 1980s, the Solomons began running out of showing space and converted the historic carriage house on the property into a two-story art gallery.
Kliger calls the collection “three collections in one.” Between the two individual collections and the Solomons’ joint purchases, more than 260 of their prints, paintings, and sculptures were donated to the Harvard Art Museums.
Artworks from the Solomons’ collection are on display through Aug. 17. The Harvard Art Museums are free to all, and open Tuesday through Sunday, 10 a.m. to 5 p.m.
From MIT, an instruction manual for turning research into startups
Since MIT opened the first-of-its-kind venture studio within a university in 2019, it has demonstrated how a systematic process can help turn research into impactful ventures.
Now, MIT Proto Ventures is launching the “R&D Venture Studio Playbook,” a resource to help universities, national labs, and corporate R&D offices establish their own in-house venture studios. The online publication offers a comprehensive framework for building ventures from the ground up within research environments.
“There is a huge opportunity cost to letting great research sit idle,” says Fiona Murray, associate dean for innovation at the MIT Sloan School of Management and a faculty director for Proto Ventures. “The venture studio model makes research systematic, rather than messy and happenstance.”
Bigger than MIT
The new playbook arrives amid growing national interest in revitalizing the United States’ innovation pipeline — a challenge underscored by the fact that just a fraction of academic patents ever reach commercialization.
“Venture-building across R&D organizations, and especially within academia, has been based on serendipity,” says MIT Professor Dennis Whyte, a faculty director for Proto Ventures who helped develop the playbook. “The goal of R&D venture studios is to take away the aspect of chance — to turn venture-building into a systematic process. And this is something not just MIT needs; all research universities and institutions need it.”
Indeed, MIT Proto Ventures is actively sharing the playbook with peer institutions, federal agencies, and corporate R&D leaders seeking to increase the translational return on their research investments.
“We’ve been following MIT’s Proto Ventures model with the vision of delivering new ventures that possess both strong tech push and strong market pull,” says Mark Arnold, associate vice president of Discovery to Impact and managing director of Texas startups at The University of Texas at Austin. “By focusing on market problems first and creating ventures with a supportive ecosystem around them, universities can accelerate the transition of ideas from the lab into real-world solutions.”
What’s in the playbook
The playbook outlines the venture studio process followed by MIT Proto Ventures. MIT’s venture studio embeds full-time entrepreneurial scientists — called venture builders — inside research labs. These builders work shoulder-to-shoulder with faculty and graduate students to scout promising technologies, validate market opportunities, and co-create new ventures.
“We see this as an open-source framework for impact,” says MIT Proto Ventures Managing Director Gene Keselman. “Our goal is not just to build startups out of MIT — it’s to inspire innovation wherever breakthrough science is happening.”
The playbook was developed by the MIT Proto Ventures team — including Keselman, venture builders David Cohen-Tanugi and Andrew Inglis, and faculty leaders Murray, Whyte, Andrew Lo, Michael Cima, and Michael Short.
“This problem is universal, so we knew if it worked there’d be an opportunity to write the book on how to build a translational engine,” Keselman says. “We’ve had enough success now to be able to say, ‘Yes, this works, and here are the key components.’”
In addition to detailing core processes, the playbook includes case studies, sample templates, and guidance for institutions seeking to tailor the model to fit their unique advantages. It emphasizes that building successful ventures from R&D requires more than mentorship and IP licensing — it demands deliberate, sustained focus, and a new kind of translational infrastructure.
How it works
A key part of MIT’s venture studio is structuring efforts into distinct tracks, or problem areas, which MIT Proto Ventures calls channels. Venture builders work in a single channel that aligns with their expertise and interests. For example, Cohen-Tanugi is embedded in the MIT Plasma Science and Fusion Center, working in the Fusion and Clean Energy channel. His first two successes have been a venture using superconducting magnets for in-space propulsion and a deep-tech startup improving power efficiency in data centers.
“This playbook is both a call to action and a blueprint,” says Cohen-Tanugi, lead author of the playbook. “We’ve learned that world-changing inventions often remain on the lab bench not because they lack potential, but because no one is explicitly responsible for turning them into businesses. The R&D venture studio model fixes that.”
© Photo courtesy of MIT Proto Ventures
The Jörg G. Bucherer-Foundation donates 100 million Swiss francs to ETH Zurich for Earth observation centre
3 tech solutions to societal needs will get help moving to market

© 2020 Feinknopf Photography / Brad Feinknopf
Kirsten Mabry
Harvard Office of Technology Development
Projects targeting heart health, data demands, quantum computing win Grid Accelerator awards
Three research projects that address urgent societal challenges — cardiovascular health, rising data demands, and the future of quantum computation — have won awards from the Harvard Grid Accelerator.
The Grid Accelerator offers funding, mentorship, and hands-on venture development to help academic projects steer emerging technologies toward commercialization. The 2025 awardees:
Help managing blood pressure
A research team in the lab of Professor Katia Bertoldi — led by postdoctoral researchers Adel Djellouli and Giovanni Bordiga — is developing a novel soft, stent-like device that could help regulate dangerous spikes in blood pressure. Designed to respond to changes in the vascular system, the device represents a potential solution for patients living with hypertension, who often struggle to manage sudden and unpredictable blood pressure fluctuations.
Redefining data networks
In the lab of Professor Kiyoul Yang, a research team led by postdoctoral researcher Tianyi Zeng is developing an integrated chip-scale optical circuit switch and amplifier — technology with the potential to increase the speed and efficiency of AI data centers. Not only does this technology push hardware limits at data center scale, expanding internet traffic capacity and enabling high-performance computing, but it also shrinks these capabilities down to a size compatible with tomorrow’s miniaturized devices.
Unlocking scalable quantum processing
Yang’s lab is also collaborating with the lab of Professor Mikhail Lukin on a project that could lay the foundation for quantum computers orders of magnitude more powerful than those in use today. Led by postdoctoral fellows Brandon Grinkemeyer and Shankar Menon, the team is developing advanced optical interconnect technology — ultra-high-bandwidth links that will enable hundreds of separate quantum processors to function as one large, unified machine.
The Grid Accelerator builds on the proven track record of the Office of Technology Development’s Physical Sciences and Engineering Accelerator. Since 2013, projects supported by the Grid/OTD Accelerator have led to the launch of 19 startups that have collectively raised nearly half a billion dollars, along with technology licenses to established companies and sponsored research agreements. The Harvard Grid was launched as a joint initiative of the Harvard John A. Paulson School of Engineering and Applied Sciences and OTD.
“These awards exemplify Harvard’s commitment to transforming academic research into innovations with broad, real-world impact,” said Isaac Kohlberg, senior associate provost and chief technology development officer at Harvard. “By supporting promising technologies at this pivotal stage, the Grid Accelerator helps bridge the gap between discovery and meaningful societal benefit.”
SEAS Dean David Parkes also emphasized the program’s impact. “At SEAS, we are committed to fostering translational research and entrepreneurial thinking. Innovation requires the ability to pursue solutions that create meaningful change. The Grid Accelerator helps our researchers in transforming bold ideas into practical solutions that benefit society both locally and worldwide.”
Learn more about the Grid Accelerator awardee projects, previous awardees, and the mission of the Harvard Grid.

Michael Faber.
Credit: Scarlet Studio
Faber appointed chief development officer for Faculty of Arts and Sciences
New associate vice president and dean of development for FAS to begin Aug. 25
Michael Faber, an experienced and versatile fundraiser who has built his career in advancement roles at leading research universities, has been named the new associate vice president and dean of development for the Faculty of Arts and Sciences.
In a message to the FAS community on Tuesday, Hopi Hoekstra, Edgerley Family Dean of the FAS, and James Husson, vice president for University alumni affairs and development, noted Faber’s “distinguished record of advancing institutional priorities and leading multibillion-dollar comprehensive campaigns.”
Faber will lead FAS’s development efforts, overseeing fundraising strategy and execution in support of FAS academic priorities, beginning Aug. 25. His portfolio will include principal gifts, major gifts, the Harvard College Fund, gift planning, stewardship, reunion giving, development communications, and volunteer engagement.
As an associate vice president, Faber will also partner with the alumni affairs and development leadership team in shaping and implementing University development strategies.
“I am thrilled to welcome Michael to the Faculty of Arts and Sciences,” Hoekstra said. “His exceptional track record in securing transformational support — especially for research — will be critical as we explore new ways to support our faculty and students in a rapidly changing landscape for higher education. I know he will be a valuable addition to the FAS leadership team and broader community.”
Faber returns to Harvard with deep connections to the University community and its mission, having previously served in fundraising roles where he partnered with development teams and senior faculty across Harvard’s Schools — including FAS, Harvard Medical School, and the School of Public Health (now the Harvard T.H. Chan School of Public Health).
Currently, Faber is vice president for medical and health sciences advancement at Ohio State University. There he oversees development teams for seven colleges and dozens of research institutes raising more than $300 million annually.
Previously, Faber led fundraising efforts at the University of California, San Francisco, as associate vice chancellor of development & alumni relations and worked as an adviser in the Massachusetts Institute of Technology president’s office under then-President Susan Hockfield.
“Michael brings unmatched experience and insight to this key role,” said Husson. “He returns to Harvard with highly relevant skills, honed at the highest levels of the nation’s top fundraising organizations. His proven ability to work across disciplines, combined with his deep Harvard roots, will strengthen our entire leadership team. I’m excited by the opportunity to partner with Michael as we seek to advance Harvard’s academic and societal mission in the years ahead.”
Over the course of his career, Faber has worked on strategies for creative donor engagement, optimizing principal and planned gifts, multibillion-dollar campaign planning, and complex proposal development for collaborative, interdisciplinary research initiatives.
“I am humbled to return to Harvard at this critical moment for higher education,” Faber said. “There has never been a more important time to champion philanthropic support to fortify its excellence for generations to come. I am grateful to Dean Hoekstra and Jim Husson for this opportunity to contribute to Harvard’s mission.”
Faber graduated from Rhodes College and earned a master’s in education from the Harvard Graduate School of Education. His wife, Kate, is a biotech professional and clinical researcher whom he met while living in Harvard Square. Together they have three children, Oliver, 13, Eleanor, 10, and Phineas, 6.
How China is Leading the Global ESG New Era
By Prof Lawrence Loh, Director of the Centre for Governance and Sustainability at NUS Business School and Ms Wang Zihan, Manager from the NUS Executive MBA-Chinese & Master in Public Administration and Management at NUS Business School
The problem with social media is bigger than who gets access
Governments around the world are imposing stricter rules on social media use, shifting the focus from regulating content to regulating access. Countries like Vietnam, Malaysia, Indonesia and Australia are implementing policies such as identity verification, platform licensing and age restrictions. Dr Chew Han Ei, Senior Research Fellow from the Institute of Policy Studies at the NUS Lee Kuan Yew School of Public Policy, highlights that while these measures aim to enhance online safety, especially for younger users, they risk excluding vulnerable groups, anonymous users, and those unable to verify their identity, potentially pushing them toward less regulated platforms.
Dr Chew cautions that true digital safety goes beyond gatekeeping; social media systems need to be redesigned to ensure that they can respond to failures and hold up under pressure. He added that rather than relying solely on access controls, a safer internet requires platforms to be accountable and equipped with robust reporting tools, responsible algorithms, responsive moderation, and interface designs that prioritise user safety and well-being.
Read more here.
Study could lead to LLMs that are better at complex reasoning
For all their impressive capabilities, large language models (LLMs) often fall short when given challenging new tasks that require complex reasoning skills.
While an accounting firm’s LLM might excel at summarizing financial reports, that same model could fail unexpectedly if tasked with predicting market trends or identifying fraudulent transactions.
To make LLMs more adaptable, MIT researchers investigated how a certain training technique can be strategically deployed to boost a model’s performance on unfamiliar, difficult problems.
They show that test-time training, a method that involves temporarily updating some of a model’s inner workings during deployment, can lead to a sixfold improvement in accuracy. The researchers developed a framework for implementing a test-time training strategy that uses examples of the new task to maximize these gains.
Their work could improve a model’s flexibility, enabling an off-the-shelf LLM to adapt to complex tasks that require planning or abstraction. This could lead to LLMs that would be more accurate in many applications that require logical deduction, from medical diagnostics to supply chain management.
“Genuine learning — what we did here with test-time training — is something these models can’t do on their own after they are shipped. They can’t gain new skills or get better at a task. But we have shown that if you push the model a little bit to do actual learning, you see that huge improvements in performance can happen,” says Ekin Akyürek PhD ’25, lead author of the study.
Akyürek is joined on the paper by graduate students Mehul Damani, Linlu Qiu, Han Guo, and Jyothish Pari; undergraduate Adam Zweiger; and senior authors Yoon Kim, an assistant professor of Electrical Engineering and Computer Science (EECS) and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and Jacob Andreas, an associate professor in EECS and a member of CSAIL. The research will be presented at the International Conference on Machine Learning.
Tackling hard domains
LLM users often try to improve the performance of their model on a new task using a technique called in-context learning. They feed the model a few examples of the new task as text prompts, which guide the model’s outputs.
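As a toy illustration of what that looks like in practice (the task and prompt below are invented for this article, not drawn from the study), the demonstrations live entirely inside the input text:

```python
# Toy few-shot prompt for in-context learning: the task is demonstrated
# inside the prompt itself, and no model weights are updated.
few_shot_prompt = (
    "Reverse each word.\n"
    "Input: cat -> Output: tac\n"
    "Input: book -> Output: koob\n"
    "Input: reason -> Output:"
)
# A capable LLM is expected to infer the pattern from the two examples
# and complete the prompt with "nosaer".
```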
But in-context learning doesn’t always work for problems that require logic and reasoning.
The MIT researchers investigated how test-time training can be used in conjunction with in-context learning to boost performance on these challenging tasks. Test-time training involves updating some model parameters — the internal variables it uses to make predictions — using a small amount of new data specific to the task at hand.
The researchers explored how test-time training interacts with in-context learning. They studied design choices that maximize the performance improvements one can coax out of a general-purpose LLM.
“We find that test-time training is a much stronger form of learning. While simply providing examples can modestly boost accuracy, actually updating the model with those examples can lead to significantly better performance, particularly in challenging domains,” Damani says.
In-context learning requires a small set of task examples, including problems and their solutions. The researchers use these examples to create a task-specific dataset needed for test-time training.
To expand the size of this dataset, they create new inputs by slightly changing the problems and solutions in the examples, such as by horizontally flipping some input data. They find that training the model on the outputs of this new dataset leads to the best performance.
In addition, the researchers only update a small number of model parameters using a technique called low-rank adaptation, which improves the efficiency of the test-time training process.
“This is important because our method needs to be efficient if it is going to be deployed in the real world. We find that you can get huge improvements in accuracy with a very small amount of parameter training,” Akyürek says.
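A minimal sketch of how such a per-task update could look, assuming the Hugging Face transformers and peft libraries; the model choice, hyperparameters, and the augment helper are illustrative stand-ins rather than the researchers’ actual pipeline:

```python
# Sketch of test-time training with low-rank adaptation (LoRA): only small
# adapter matrices are trained, the base weights stay frozen, and the
# adapted copy is discarded after the query, so the update is temporary.
import copy
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # stand-in model
base_model = AutoModelForCausalLM.from_pretrained("gpt2")

def augment(examples):
    # Hypothetical augmentation: derive extra training pairs by lightly
    # transforming the demonstrations (the study mentions, e.g., flips).
    return examples + [(inp[::-1], out[::-1]) for inp, out in examples]

def test_time_train(model, examples, steps=10, lr=1e-4):
    config = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"])
    adapted = get_peft_model(copy.deepcopy(model), config)
    optimizer = torch.optim.AdamW(
        [p for p in adapted.parameters() if p.requires_grad], lr=lr)
    adapted.train()
    for _ in range(steps):
        for inp, out in augment(examples):
            batch = tokenizer(inp + " " + out, return_tensors="pt")
            loss = adapted(**batch, labels=batch["input_ids"]).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    return adapted

# Adapt on a task's few demonstrations, answer the query, then discard.
task_examples = [("2 4 6", "8"), ("1 3 5", "7")]
solver = test_time_train(base_model, task_examples)
```

Because only the adapter weights train, the update is cheap, and discarding the adapted copy restores the original model, matching the temporary, per-task character of test-time training described above.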
Developing new skills
Streamlining the process is key, since test-time training is employed on a per-instance basis, meaning a user would need to do this for each individual task. The updates to the model are only temporary, and the model reverts to its original form after making a prediction.
A model that usually takes less than a minute to answer a query might take five or 10 minutes to provide an answer with test-time training, Akyürek adds.
“We wouldn’t want to do this for all user queries, but it is useful if you have a very hard task that you want the model to solve well. There also might be tasks that are too challenging for an LLM to solve without this method,” he says.
The researchers tested their approach on two benchmark datasets of extremely complex problems, such as IQ puzzles. It boosted accuracy as much as sixfold over techniques that use only in-context learning.
Tasks that involved structured patterns or those which used completely unfamiliar types of data showed the largest performance improvements.
“For simpler tasks, in-context learning might be OK. But updating the parameters themselves might develop a new skill in the model,” Damani says.
In the future, the researchers want to use these insights toward the development of models that continually learn.
The long-term goal is an LLM that, given a query, can automatically determine if it needs to use test-time training to update parameters or if it can solve the task using in-context learning, and then implement the best test-time training strategy without the need for human intervention.
This work is supported, in part, by the MIT-IBM Watson AI Lab and the National Science Foundation.
© Credit: Jose-Luis Olivares, MIT; iStock
Did Jane Austen even care about romance?
Did Jane Austen even care about romance?
Scholars contest novelist’s ‘rom-com’ rep as 250th anniversary ushers in new screen adaptations
Eileen O’Grady
Harvard Staff Writer

Illustration by Liz Zonarich/Harvard Staff
Deidre Lynch thinks everyone should read “Mansfield Park.”
Jane Austen may be best known for the romantic and witty “Pride and Prejudice,” but Lynch, Ernest Bernbaum Professor of Literature in the Department of English, wants readers to see the 19th-century novelist as more than a “rom-com writer.”
“The marriage plot is not the thing Austen is most interested in,” Lynch said. “She’s interested in how difficult it is to be a good person. She’s interested in inequality and domination, and power. She’s interested in how people who don’t have a lot of power nonetheless preserve their principles. What is independence of mind even if you don’t have financial or political independence?”
This year marks the 250th anniversary of Jane Austen’s birth — and for a woman who had to publish all of her works anonymously, she’s now more visible than ever. In addition to new editions of the novels, a fresh wave of film and TV treatments has recently been released or is in the works, including “Miss Austen,” “Jane Austen: Rise of a Genius,” and “The Other Bennet Sister” (all BBC), and “Pride and Prejudice” (Netflix).
Lynch, who teaches “Jane Austen’s Fictions and Fans,” said the novelist’s work continues to resonate in part because of her minimalist style, which makes the fiction easy to modernize. “Clueless” (1995) and “Fire Island” (2022) are two examples.
“Her plots are fairly uncluttered and unlike many other 19th-century novelists, she doesn’t spend a lot of time describing her characters or settings, so it makes it easier to slot ourselves from the 21st century into her books,” Lynch explained. “The characters are so vivid and life-like, we all feel as though we know a Mrs. Bennet or a Mr. Woodhouse or a Mr. Collins.”
Samantha Matherne, professor of philosophy, became interested in the moral, aesthetic, and epistemic themes of Austen’s work after rereading “Sense and Sensibility” a few years ago — a rediscovery that inspired her course “The Philosophy of Jane Austen.”

Samantha Matherne.
Photo by Grace DuVal

Deidre Lynch.
Photo courtesy of Deidre Lynch
Is Austen a philosopher? Not exactly, Matherne said (though it’s a question students debate in her course). Austen saw herself first and foremost as a novelist, but she explored philosophical ideas through narrative rather than formal argument.
“If you think about ‘Pride and Prejudice’ or ‘Sense and Sensibility,’ both novels are exploring the concepts that are in the titles and asking, ‘Should they have a role in one’s life?’” Matherne said. “Austen seems to say pride and prejudice are vices that get in the way of morality and knowledge — and romance! You do get Austen advocating for a picture of the good life as one in which you’re balancing sense and sensibility, as I think both the characters Elinor and Marianne come to do developmentally over the course of the novel.”
Matherne’s course also asks students to discuss whether Austen is even interested in romance. As Matherne pointed out, every novel might follow a marriage plot, but the weddings themselves get little narrative attention, if Austen even describes them at all.
“These romantic symbols of the proposal and the wedding, Austen has absolutely no interest in,” Matherne said. “She’s interested in loving relationships between couples, between friends, between communities; that’s the romance of Austen. This is why reading the novels is a different experience than watching movies, because you get the interiority of love and romance. You need words on the page to describe the rush of emotions and the ambiguity of emotions and the doubt, hope, anger, and fear.”
“She’s interested in loving relationships between couples, between friends, between communities; that’s the romance of Austen.”
Samantha Matherne
That is why “Mansfield Park,” Matherne and Lynch agree, is the perfect book for digging into Austen’s heavier themes. Written right after “Pride and Prejudice,” the novel has a less charismatic heroine and takes a darker direction on issues that aren’t typically associated with Austen: class, inequality, power, and the slave trade, referenced through the sugar plantation in Antigua that sustains the Bertram family’s fortune.
“Students get really interested in the ways in which Austen is commenting on the history of empire and slavery and race,” Lynch said. “Many of them end up saying, ‘“Mansfield Park” is absolutely my favorite,’ because of the ways in which it takes on these questions of power.”
“Students get really interested in the ways in which Austen is commenting on the history of empire and slavery and race.”
Deidre Lynch
“The focus of ‘Mansfield Park’ is really diffuse. It roams around the different characters and dynamics,” Matherne agreed. “Austen is trying to give us a novel of a social world rather than the novel of one character or one romantic pairing.”
For the uninitiated, Lynch recommended starting with “Pride and Prejudice” as the most accessible entry point before moving to the other novels, not forgetting the “Juvenilia,” a collection of pieces Austen wrote as a teenager.
“I do hope that anybody who starts with ‘Pride and Prejudice’ goes on to all the others as well,” said Lynch, who encourages students to read all six Austen novels every year. “She’s the person who convincingly figured out what the novel form could do and could be and wrote to improve it. She’s a totally brilliant novelist.”
Why are women twice as likely to develop Alzheimer’s as men?

Andrzej Wojcicki/Getty Images
Alvin Powell
Harvard Staff Writer
Researchers focusing on chromosomes, menopause
A neglected piece of the Alzheimer’s puzzle has been getting increased scientific attention: why women are twice as likely as men to develop the disease.
One might be tempted to explain the disparity as a natural consequence of women living longer. But those studying the disease say that wouldn’t account for such a large difference, and they’re not precisely sure what would.
While many factors may be at play, researchers are zeroing in on two where the biological differences between women and men are clear: chromosomes and menopause.
Women have two X chromosomes, and men have an X and a Y. Differences between genes held on the X and Y chromosomes, researchers say, may give women an increased chance of developing Alzheimer’s.
Menopause, when production of the hormones estrogen and progesterone declines, is another clear difference between the sexes. Those hormones are widely known for their roles in the reproductive system, but estrogen also acts on the brain, researchers say.

Researchers Rachel Buckley (left) and Anna Bonkhoff.
Photos by Veasey Conway/Harvard Staff Photographer
Whatever’s at play is likely part of deeper neurological processes, researchers say, pointing to similar sex-related differences in other conditions. Multiple sclerosis and migraine, for example, are both more common in women. Parkinson’s disease, brain tumors, and epilepsy, by contrast, are more common in men. In some cases — like migraine in women and Parkinson’s in men — increased severity accompanies increased incidence.
“Epidemiologically, we see that for almost all neurological diseases, there are differences in how many biological women and men are affected,” said Anna Bonkhoff, resident and research fellow in neurology at Harvard Medical School and Mass General Brigham. “There’s a tendency, for example, in MS and migraine for more females to be affected, while it’s the contrary for brain tumors and Parkinson’s. Just based on these numbers, you get the feeling that something needs to underlie these differences in terms of the biology.”
The basic building blocks are genes, which in humans are arranged on 46 chromosomes, organized into 23 pairs. One of those pairs — XX in women and XY in men — contains the genes that define sex-based characteristics, differences that are key areas of exploration.
The X and Y chromosomes differ significantly, Bonkhoff said.
The X chromosome is rich in genes, while the Y chromosome has lost a significant number of genes over the millennia. Having two X chromosomes, though, doesn’t mean that women have a double dose of the proteins and other gene products produced by those genes, because one of the X chromosomes is silenced.
That silencing, however, is imperfect, Bonkhoff said, leaving some genes on the silenced X chromosome active. Studies have shown that genes on the X chromosome are related to the immune system, brain function, and Alzheimer’s disease.
“We know that biological men and women differ by the number of X chromosomes,” said Bonkhoff, lead author of a recent review article in the journal Science Advances that examined sex-related differences in Alzheimer’s disease and stroke.
“A lot of genes for the immune system and regulating brain structure are located on the X chromosome, so the dosages differ to certain degrees between men and women. That seems to have an effect.”
“If we can find ways to incorporate sex difference to optimize the treatment for individuals, both men and women, that is the overarching goal.”
Anna Bonkhoff
Another key difference between men and women relates to their hormones. All humans have three sex hormones: estrogen, progesterone, and testosterone. In women, estrogen and progesterone dominate, while in men testosterone dominates. When one looks at changes between men and women with respect to hormones and aging, menopause is a significant nexus over the course of a lifetime.
“Menopause is part of the puzzle, probably one of the bigger ones,” Bonkhoff said. “I’m not saying it’s the only one — aging is relevant by itself, and there’s a lot of interesting research looking at what aging does to the immune system that seems to have implications for cognitive changes.”
Women typically go through menopause from their mid-40s to mid-50s. During that time, their ovaries stop producing estrogen and progesterone, resulting in the characteristic symptoms of menopause, such as hot flashes, emotional changes, cessation of menstruation, and difficulty sleeping.
In March, Rachel Buckley, associate professor of neurology at Harvard Medical School, and her colleagues followed that hormonal thread in a study that examined the relationship between hormone replacement therapy and the accumulation of the protein tau in the brain, a key characteristic of Alzheimer’s disease.
Buckley, who is also an investigator in neurology at Massachusetts General Hospital, found that women who were receiving hormone replacement therapy later in life, after age 70, had significantly higher levels of tau accumulation and experienced greater cognitive decline.
The result, she said, supports the “timing” approach to hormone therapy, which holds that hormone replacement therapy can safely be used to ease the symptoms of menopause, but should not be continued into old age.
The timing theory arose in response to a study by the federally funded Women’s Health Initiative in the early 2000s, which showed an association between women taking hormone replacement therapy and increased cognitive decline. That was contrary to expectations from earlier studies that indicated estrogen had protective effects on cognition.
Later studies, however, showed that hormone therapy appeared to be protective in younger women but was associated with declining cognition in women age 65 and up.
Buckley’s research took that work a step further, linking it to physiological changes in the brain. Alzheimer’s disease involves the accumulation of amyloid beta into characteristic plaques in the brain — considered an important hallmark of the condition. Those plaques spur the development of tangles of a protein called tau, which then sparks damaging inflammation.
Buckley’s research showed that hormone therapy among older women was associated with an increase in tau and with cognitive decline. It was not associated with an increase in amyloid beta, which today is a common therapeutic target.
“We’re trying to see if we can set up a new study design where we can really look at the time of menopause.”
Rachel Buckley
The research, published in the journal Science Advances in March and funded in part by the National Institute on Aging, allowed Buckley, Gillian Coughlan, first author and instructor in neurology, and their colleagues to highlight the role of hormone replacement in the accumulation of tau tangles in older women. But Buckley said the study also highlights significant areas where work remains to be done.
The database used for the study didn’t have information about variables that may be important, such as a woman’s reproductive history, information on when the replacement therapy was initiated, and the length of hormone therapy use.
Understanding the importance of that missing data, Buckley said, is itself a step forward, even though its absence limits the conclusions her study can draw. To remedy that, Buckley is planning her own study that will gather what she believes is all the pertinent data, including reproductive history and details of hormone therapy use.
“We work with a lot of secondary data that already exists, and that’s great but there are limitations to what we can do with it,” Buckley said. “We’re trying to see if we can set up a new study design where we can really look at the time of menopause, what is changing in the blood, what is changing in the brain, what is changing in cognition, and how that might be associated with later life risk.”
Sussing out how biological sex affects risk of Alzheimer’s disease, Bonkhoff and Buckley said, can help us understand Alzheimer’s more generally. That understanding, they said, has the potential to lead to new pathways of treatment and prevention of a disease that, despite decades of research and encouraging recent progress, is still poorly understood.
“It’s an important aim in medicine to understand and then to innovate in how we can prevent or treat,” Bonkhoff said. “If we can find ways to incorporate sex difference to optimize the treatment for individuals, both men and women, that is the overarching goal.”

Michael D. Smith and Klara Jelinkova at the IT Summit.
Photo by Neal Adolph Akatsuka
IT Summit focuses on balancing AI challenges and opportunities
With the tech here to stay, Michael Smith says professors, students must become sophisticated users
Roselyn Hobbs
Harvard Correspondent
The critical role of technology in advancing Harvard’s mission and the potential of generative AI to reshape the academic and operational landscape were the key topics discussed during the University’s 12th annual IT Summit. Hosted by the CIO Council, the June 11 event attracted more than 1,000 Harvard IT professionals.
“Technology underpins every aspect of Harvard,” said Klara Jelinkova, vice president and University chief information officer, who opened the event by praising IT staff for their impact across the University.
That sentiment was echoed by keynote speaker Michael D. Smith, the John H. Finley Jr. Professor of Engineering and Applied Sciences and Harvard University Distinguished Service Professor, who described “people, physical spaces, and digital technologies” as three of the core pillars supporting Harvard’s programs.
In his address, “You, Me, and ChatGPT: Lessons and Predictions,” Smith explored the balance between the challenges and the opportunities of using generative AI tools. He pointed to an “explainability problem” in generative AI tools and how they can produce responses that sound convincing but lack transparent reasoning: “Is this answer correct, or does it just look good?” Smith also highlighted the challenges of user frustration due to bad prompts, “hallucinations,” and the risk of overreliance on AI for critical thinking, given its “eagerness” to answer questions.
In showcasing innovative coursework from students, Smith highlighted the transformative potential of “tutorbots,” or AI tools trained on course content that can offer students instant, around-the-clock assistance. AI is here to stay, Smith noted, so educators must prepare students for this future by ensuring they become sophisticated, effective users of the technology.
Asked by Jelinkova how IT staff can help students and faculty, Smith urged the audience to identify early adopters of new technologies to “understand better what it is they are trying to do” and support them through the “pain” of learning a new tool. Understanding these uses and fostering collaboration can accelerate adoption and “eventually propagate to the rest of the institution.”
The spirit of innovation and IT’s central role at Harvard continued throughout the day’s programming, which was organized into four pillars:
- Teaching, Learning, and Research Technology included sessions where instructors shared how they are currently experimenting with generative AI, from the Division of Continuing Education’s “Bot Club,” where instructors collaborate on AI-enhanced pedagogy, to the deployment of custom GPTs and chatbots at Harvard Business School.
- Innovation and the Future of Services included sessions on AI video experimentation, robotic process automation, ethical implementation of AI, and a showcase of the University’s latest AI Sandbox features.
- Infrastructure, Applications, and Operations featured a deep dive on the extraordinary effort to bring the new David Rubenstein Treehouse conference center to life, including testing new systems in a physical “sandbox” environment and deploying thousands of feet of network cabling.
- And the Skills, Competencies, and Strategies breakout sessions reflected on the evolving skillsets required by modern IT — from automation design to vendor management — and explored strategies for sustaining high-functioning, collaborative teams, including workforce agility and continuous learning.
Amid the excitement around innovation, the summit also explored the environmental impact of emerging technologies. In a session focused on Harvard’s leadership in IT sustainability — as part of its broader Sustainability Action Plan — presenters explored how even small individual actions, like crafting more effective prompts, can meaningfully reduce the processing demands of AI systems. As one panelist noted, “Harvard has embraced AI, and with that comes the responsibility to understand and thoughtfully assess its impact.”
MIT chemists boost the efficiency of a key enzyme in photosynthesis
During photosynthesis, an enzyme called rubisco catalyzes a key reaction — the incorporation of carbon dioxide into organic compounds to create sugars. However, rubisco, which is believed to be the most abundant enzyme on Earth, is very inefficient compared to the other enzymes involved in photosynthesis.
MIT chemists have now shown that they can greatly enhance a version of rubisco found in bacteria from a low-oxygen environment. Using a process known as directed evolution, they identified mutations that could boost rubisco’s catalytic efficiency by up to 25 percent.
The researchers now plan to apply their technique to forms of rubisco that could be used in plants to help boost their rates of photosynthesis, which could potentially improve crop yields.
“This is, I think, a compelling demonstration of successful improvement of a rubisco’s enzymatic properties, holding out a lot of hope for engineering other forms of rubisco,” says Matthew Shoulders, the Class of 1942 Professor of Chemistry at MIT.
Shoulders and Robert Wilson, a research scientist in the Department of Chemistry, are the senior authors of the new study, which appears this week in the Proceedings of the National Academy of Sciences. MIT graduate student Julie McDonald is the paper’s lead author.
Evolution of efficiency
When plants or photosynthetic bacteria absorb energy from the sun, they first convert it into energy-storing molecules such as ATP. In the next phase of photosynthesis, cells use that energy to transform a molecule known as ribulose bisphosphate into glucose, which requires several additional reactions. Rubisco catalyzes the first of those reactions, known as carboxylation. During that reaction, carbon from CO2 is added to ribulose bisphosphate.
Compared to the other enzymes involved in photosynthesis, rubisco is very slow, catalyzing only one to 10 reactions per second. Additionally, rubisco can interact with oxygen, leading to a competing reaction that incorporates oxygen instead of carbon — a process that wastes some of the energy absorbed from sunlight.
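For context, the two competing reactions can be written in textbook stoichiometry (a standard biochemistry summary, not equations given in the paper):

\[
\text{RuBP} + \mathrm{CO_2} + \mathrm{H_2O} \xrightarrow{\ \text{rubisco}\ } 2\ \text{3-phosphoglycerate} \quad \text{(carboxylation)}
\]
\[
\text{RuBP} + \mathrm{O_2} \xrightarrow{\ \text{rubisco}\ } \text{3-phosphoglycerate} + \text{2-phosphoglycolate} \quad \text{(oxygenation)}
\]

Only the first reaction feeds carbon into sugar synthesis; the second yields 2-phosphoglycolate, which the cell must expend energy to recycle.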
“For protein engineers, that’s a really attractive set of problems because those traits seem like things that you could hopefully make better by making changes to the enzyme’s amino acid sequence,” McDonald says.
Previous research has led to improvement in rubisco’s stability and solubility, which resulted in small gains in enzyme efficiency. Most of those studies used directed evolution — a technique in which a naturally occurring protein is randomly mutated and then screened for the emergence of new, desirable features.
This process is usually done using error-prone PCR, a technique that first generates mutations in vitro (outside of the cell), typically introducing only one or two mutations in the target gene. In past studies on rubisco, this library of mutations was then introduced into bacteria whose growth rate depends on rubisco activity. Limitations in error-prone PCR and in the efficiency of introducing new genes restrict the total number of mutations that can be generated and screened using this approach. Manual mutagenesis and selection steps also add more time to the process over multiple rounds of evolution.
The MIT team instead used a newer mutagenesis technique that the Shoulders Lab previously developed, called MutaT7. This technique allows the researchers to perform both mutagenesis and screening in living cells, which dramatically speeds up the process. Their technique also enables them to mutate the target gene at a higher rate.
“Our continuous directed evolution technique allows you to look at a lot more mutations in the enzyme than has been done in the past,” McDonald says.
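The logic of iterated mutagenesis and selection can be made concrete with a toy in-silico loop. This is purely an illustration of the general directed evolution idea, not the MutaT7 protocol; the fitness callable stands in for the growth-based screen:

```python
# Toy simulation of directed evolution: diversify, screen, repeat.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def mutate(seq, n_mutations=1):
    # Random point mutations stand in for in-cell mutagenesis.
    seq = list(seq)
    for _ in range(n_mutations):
        pos = random.randrange(len(seq))
        seq[pos] = random.choice(AMINO_ACIDS)
    return "".join(seq)

def directed_evolution(seq, fitness, rounds=6, library_size=1000):
    # Each round diversifies the current best variant, then keeps the
    # sequence that scores highest under the selection pressure.
    best = seq
    for _ in range(rounds):
        library = [mutate(best) for _ in range(library_size)]
        best = max(library + [best], key=fitness)
    return best

# Example: select for alanine content (a stand-in objective).
evolved = directed_evolution("MKTAYIAKQR", fitness=lambda s: s.count("A"))
```

In the wet-lab version described above, mutagenesis and screening both happen inside living cells, which is what lets the MIT approach run continuously and explore far more mutations per round.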
Better rubisco
For this study, the researchers began with a version of rubisco, isolated from a family of semi-anaerobic bacteria known as Gallionellaceae, that is one of the fastest forms of rubisco found in nature. During the directed evolution experiments, which were conducted in E. coli, the researchers kept the microbes in an environment with atmospheric levels of oxygen, creating evolutionary pressure to adapt to oxygen.
After six rounds of directed evolution, the researchers identified three different mutations that improved the rubisco’s resistance to oxygen. Each of these mutations is located near the enzyme’s active site (where it performs carboxylation or oxygenation). The researchers believe that these mutations improve the enzyme’s ability to preferentially interact with carbon dioxide over oxygen, which leads to an overall increase in carboxylation efficiency.
“The underlying question here is: Can you alter and improve the kinetic properties of rubisco to operate better in environments where you want it to operate better?” Shoulders says. “What changed through the directed evolution process was that rubisco began to like to react with oxygen less. That allows this rubisco to function well in an oxygen-rich environment, where normally it would constantly get distracted and react with oxygen, which you don’t want it to do.”
In ongoing work, the researchers are applying this approach to other forms of rubisco, including rubisco from plants. Plants are believed to lose about 30 percent of the energy from the sunlight they absorb through a process called photorespiration, which occurs when rubisco acts on oxygen instead of carbon dioxide.
“This really opens the door to a lot of exciting new research, and it’s a step beyond the types of engineering that have dominated rubisco engineering in the past,” Wilson says. “There are definite benefits to agricultural productivity that could be leveraged through a better rubisco.”
The research was funded, in part, by the National Science Foundation, the National Institutes of Health, an Abdul Latif Jameel Water and Food Systems Lab Grand Challenge grant, and a Martin Family Society Fellowship for Sustainability.
© Credit: Jose-Luis Olivares, MIT
Professor Emeritus Barry Vercoe, a pioneering force in computer music, dies at 87
MIT Professor Emeritus Barry Lloyd Vercoe, a pioneering force in computer music, a founding faculty member of the MIT Media Lab, and a leader in the development of MIT’s Music and Theater Arts Section, passed away on June 15. He was 87.
Vercoe’s life was a rich symphony of artistry, science, and innovation that led to profound enhancements of musical experience for expert musicians as well as for the general public — and especially young people.
Born in Wellington, New Zealand, on July 24, 1937, Vercoe earned bachelor’s degrees in music (in 1959) and mathematics (in 1962) from the University of Auckland, followed by a doctor of musical arts in music composition from the University of Michigan in 1968.
After completing postdoctoral research in digital audio processing at Princeton University and a visiting lectureship at Yale University, Vercoe joined MIT’s Department of Humanities (Music) in 1971, beginning a tenure in the department that lasted through 1984. During this period, he played a key role in advancing what would become MIT’s Music and Theater Arts (MTA) Section, helping to shape its forward-thinking curriculum and interdisciplinary philosophy. Vercoe championed the integration of musical creativity with scientific inquiry, laying the groundwork for MTA’s enduring emphasis on music technology and experimental composition.
In 1973, Vercoe founded MIT’s Experimental Music Studio (EMS) — the Institute’s first dedicated computer music facility, and one of the first in the world. Operated under the auspices of the music program, EMS became a crucible for innovation in algorithmic composition, digital synthesis, and computer-assisted performance. His leadership not only positioned MIT as a hub for music technology, but also influenced how the Institute approached the intersection of the arts with engineering. This legacy is honored today by a commemorative plaque in the Kendall Square MBTA station.
Violist, faculty founder of the MIT Chamber Music Society, and Institute Professor Marcus Thompson says: “Barry was first and foremost a fine musician, and composer for traditional instruments and ensembles. As a young professor, he taught our MIT undergraduates to write and sing Renaissance counterpoint as he envisioned how the act of traditional music-making offered a guide to potential artistic interaction between humans and computers. In 1976, he enlisted me to premiere what became his iconic, and my most-performed, work, ‘Synapse for Viola and Computer.’”
During a Guggenheim Fellowship in 1982–83, Vercoe developed the Synthetic Performer, a groundbreaking real-time interactive accompaniment system, while working closely with flautist Larry Beauregard at the Institute for Research and Coordination in Acoustics/Music (IRCAM) in Paris.
In 1984, Vercoe became a founding faculty member of the MIT Media Lab, where he launched the Music, Mind, and Machine group. His research spanned machine listening, music cognition, and real-time digital audio synthesis. His Csound language, created in 1985, is still widely used for music programming, and his contributions helped define the MPEG-4 Structured Audio standard.
He also served as associate academic head of the Media Lab’s graduate program in Media Arts and Sciences (MAS). Vercoe mentored many future leaders in digital music and sound computation, including two of his MAS graduate students — Anna Huang SM ’08 and Paris Smaragdis PhD ’01 — who have recently joined MIT’s music faculty, as well as Miller Puckette, an emeritus faculty member at the University of California at San Diego, and Richard Boulanger, a professor of electronic production and design at the Berklee College of Music.
“Barry Vercoe will be remembered by designers, developers, researchers, and composers for his greatest ‘composition,’ Csound, his free and open-source software synthesis language,” states Boulanger. “I know that, through Csound, Barry’s musical spirit will live on, not only in my teaching, my research, and my music, but in the apps, plugins, and musical compositions of generations to come.”
Tod Machover, faculty director of the MIT Media Lab and Muriel R. Cooper Professor of Music and Media, reflects, “Barry Vercoe was a giant in the field of computer music whose innovations in software synthesis, interactive performance, and educational tools for young people influenced and inspired many, including myself. He was a superb mentor, always making sure that artistic sensibility drove music tech innovation, and that sophisticated expression was at the core of Media Lab — and MIT — culture.”
Vercoe’s work earned numerous accolades. In addition to the Guggenheim Fellowship, he was also honored with the 1992 Computerworld Smithsonian Award for innovation and the 2004 SEAMUS Lifetime Achievement Award.
Beyond MIT, Vercoe consulted with Analog Devices and collaborated with international institutions like IRCAM under the direction of Pierre Boulez. His commitment to democratizing music technology was evident in his contributions to the One Laptop per Child initiative, which brought accessible digital sound tools to young people in underserved communities worldwide.
He is survived by his former wives, Kathryn Veda Vaughn and Elizabeth Vercoe; their children, Andrea Vercoe and Scott Vercoe; and generations of students and collaborators who continue to build on his groundbreaking work. A memorial service for family will be held in New Zealand later this summer, and a special event in his honor will take place at MIT in the fall. The Media Lab will share details about the MIT gathering as they become available.
Vercoe, who was named professor emeritus at the MIT Media Lab upon his retirement in 2010, leaves a legacy that embodies the lab’s — and MIT’s — vision of creative, ethical, interdisciplinary research at the convergence of art, science, and technology. His music, machines, and generously inventive spirit will forever shape the way we listen, learn, and communicate.
© Photo: MIT Libraries
Meditation provides calming solace — except when it doesn’t
Researchers find ways to promote altered states of consciousness, reduce risks of distress that affect some
Jacob Sweet
Harvard Staff Writer

Meditation is ascendant in the U.S.
Clinicians recommend the practice to treat anxiety and depression without the risk of drug dependency, and millions practice meditation alone or on retreats. In 2022, the National Institutes of Health found that 17.3 percent of U.S. adults meditated, up from 7.5 percent two decades before.
Its effects are largely positive, shown to alleviate stress, anxiety, and depression. Neuroimaging studies have explored the neurobiological effects that lead to improved self-awareness, emotional regulation, and attentional control.
“These kinds of experiences are surprisingly widespread.”
Matthew Sacchet
But not every experience with meditation provides solace. Matthew Sacchet, director of the meditation research program at Harvard Medical School, has determined in recent studies that the practice can create suffering in some cases, an issue that deserves greater attention from researchers and clinicians.
Meditation can lead to altered states of consciousness that many experience as mystical, spiritual, energetic, or magical. While often described in traditional meditation manuals, these experiences — including out-of-body experiences and changes in perceived size — are largely overlooked in modern scientific literature.
“These kinds of experiences are surprisingly widespread,” said Sacchet, who is also an associate professor at HMS.
In a 2024 paper, he and his colleagues, including first author Malcolm Wright of Massey University, found 45 percent of participants reported non-pharmacologically induced meditation-related altered states at least once in their lives. While the episodes were mostly positive, Sacchet was surprised by how often they weren’t — and how little those instances were discussed.
“There was evidence of an epidemic of subsequent suffering,” he said, with 13 percent of people reporting moderate or greater suffering outcomes from their experiences.
“There was evidence of an epidemic of subsequent suffering.”
Matthew Sacchet
With more people experimenting with meditation and other potentially reality-shifting practices, he said, clinical professionals were either not taking negative experiences seriously or were unaware they were happening.
To examine the scope and impact of meditation-related altered states, he and his colleagues used survey data from more than 3,000 people to determine the risk factors for meditation-related altered states of consciousness and subsequent suffering.
They also studied how religious practice, mental health status, and other variables shaped these experiences.
In a 2025 paper published in the academic journal Clinical Psychological Science, the researchers studied predictors for meditation-related altered states and subsequent challenges related to these experiences.
Among the factors that they studied, the three strongest predictors of meditation-related altered states were attempted divine, magical, or occult practices; past psychedelic use; and contemplation of mysteries.
“If you try to distort reality, you might succeed,” said Sacchet. “And if you’ve taken psychedelics, then you’re more likely to have these kinds of experiences.”
Those same factors, along with total time in spiritual or meditative practice outside retreats, also increased distress beyond the typical levels that followed altered states.
These possible negative outcomes included perceptual changes, fear, distorted emotions or thoughts, and significant distress that sometimes even required clinical intervention.
Other practices — like mindfulness of the body and compassionate loving-kindness meditation — made meditation-related altered states more common but didn’t disproportionately increase suffering.
Certain factors made reality-distorting experiences, positive and negative, less likely. Prayer, for example, made people 40 percent less likely to experience them.
That was another surprise, Sacchet said, “and perhaps welcome news for conservative religious communities that wish to avoid these experiences while encouraging engagement with prayer.”
Sacchet also found that meditation-related altered states of consciousness weren’t associated with any religious or spiritual traditions but rather with specific practices. Identifying as a Buddhist didn’t have a meaningful impact on likelihood, but practicing mindfulness of the body did. Praying lowered the incidence of these experiences, but being Christian had no effect.
And while meditation retreats have become far more popular in recent years, Sacchet and his colleagues found they had little effect on the overall frequency of meditation-related altered states across the study population — although altered states did still occur. It was practice outside of retreats that did more to increase people’s risk.
“The finding was almost completely unexpected,” he said.
By highlighting the prevalence of negative encounters with altered states, Sacchet hopes to improve people’s experiences with meditation-related practices. The more familiar clinicians and practitioners are with the possible negative repercussions of altered states of consciousness, the better they can help people talk through and cope with such experiences.
“It’s important to enable those who have these experiences to realize that they are not alone,” he said. “They should be able to talk about them without being regarded as crazy, and to better integrate the experiences into their worldview, while being appropriately supported by clinicians.”
Sacchet also emphasized that having difficult, challenging, and negative experiences while meditating, or in general, isn’t necessarily a bad thing.
“I think we need to push against the sentiment that anything not experienced as a positive is to be avoided,” he said. “Real growth may happen when facing such challenges, and we’re actively investigating this possibility.”
In future research with Harvard’s Meditation Research Program, Sacchet hopes to explore the risk profiles of particular meditation-related altered states of consciousness and study whether certain groups of people have different kinds of experiences with meditation-related practices.
It’s part of Sacchet’s hope to bring scientific rigor to a field that has long been understudied by academics. “Of course, now we know that these experiences are not unusual at all,” he said, “and too important to be ignored by science.”
New postdoctoral fellowship program to accelerate innovation in health care
The MIT Health and Life Sciences Collaborative (MIT HEALS) is launching the Biswas Postdoctoral Fellowship Program to advance the work of outstanding early-career researchers in health and life sciences. Supported by a gift from the Biswas Family Foundation, the program aims to help apply cutting-edge research to improve health care and the lives of millions.
The program will support exceptional postdocs dedicated to innovation in human health care through a full range of pathways, such as leveraging AI in health-related research, developing low-cost diagnostics, and the convergence of life sciences with such areas as economics, business, policy, or the humanities. With initial funding of $12 million, five four-year fellowships will be awarded for each of the next four years, starting in early 2026.
“An essential goal of MIT HEALS is to find new ways and opportunities to deliver health care solutions at scale, and the Biswas Family Foundation shares our commitment to scalable innovation and broad impact. MIT is also in the talent business, and the foundation’s gift allows us to bring exceptional scholars to campus to explore some of the most pressing issues in human health and build meaningful connections across academia and industry. We look forward to welcoming the first cohort of Biswas Fellows to MIT,” says MIT president Sally Kornbluth.
“We are deeply honored to launch this world-class postdoctoral fellows program,” adds Anantha P. Chandrakasan, MIT’s chief innovation and strategy officer and head of MIT HEALS. “We fully expect to attract top candidates from around the globe to lead innovative cross-cutting projects in AI and health, cancer therapies, diagnostics, and beyond. These fellows will be selected through a rigorous process overseen by a distinguished committee, and will have the opportunity to collaborate with our faculty on the most promising and impactful ideas.”
Angela Koehler, faculty lead of MIT HEALS, professor in MIT’s Department of Biological Engineering, and associate director of the Koch Institute for Integrative Cancer Research, emphasized that the objectives of MIT HEALS align well with a stated goal of the Biswas Family Foundation: to leverage “scientific and technological advancements to revolutionize health care and make a lasting impact on global public health.”
“Health care is a team sport,” Koehler says. “MIT HEALS seeks to create connections involving investigators with diverse expertise across the Institute to tackle the most transformative problems impacting human health. Members of the MIT community are well poised to participate in teams and make an impact.”
MIT HEALS also seeks to maximize its effectiveness by expanding collaboration with medical schools and hospitals, starting with defining important problems that can be approached through research, and continuing all the way to clinical studies, Koehler says.
The Biswas Family Foundation has already demonstrated a similar strategy.
“The Biswas family has a history of enabling connections and partnerships between institutions that each bring a piece to the puzzle,” Koehler says. “This could be a dataset, an algorithm, an agent, a technology platform, or patients.”
Hope Biswas, co-founder of the Biswas Family Foundation with her husband, MIT alumnus Sanjit Biswas SM ’05, also highlighted the synergies between the foundation and MIT.
“The Biswas Family Foundation is proud to support the MIT HEALS initiative, which reimagines how scientific discovery can translate into real-world health impact. Its focus on promoting interdisciplinary collaboration to find new solutions to challenges in health care aligns closely with our mission to advance science and technology to improve health outcomes at scale,” Biswas says.
“As part of this commitment,” Biswas adds, “we are especially proud to support outstanding postdoctoral scholars focused on high-impact cross-disciplinary work in fields such as computational biology, nanoscale therapeutics, women’s health, and fundamental, curiosity-driven life sciences research. We are excited to contribute to an effort that brings together cutting-edge science and a deep commitment to translating knowledge into action.”
AI and machine-learning systems present a new universe of opportunities to investigate disease, biological mechanisms, therapeutics, and health care delivery using huge datasets.
“AI and computational systems biology can improve the accuracy of diagnostic approaches, enable the development of precision medicines, improve choices related to individualized treatment strategy, and improve operational efficiency within health care systems,” says Koehler. “Sanjit and Hope’s support of broad initiatives in AI and computational systems biology will help MIT researchers explore a variety of paths to impact human health on a large scale.”
Frontiers in health-related research are increasingly found where diverse fields converge, and Koehler provides the example of how advances in high-throughput experimentation to develop large datasets “may couple well with the development of new computation or AI tools.” She adds that the four-year funding term provided by the postdoctoral fellowship is “long enough to enable fellows to think big and take on projects at interfaces, emerging as bilingual researchers at the end of the program.”
Chandrakasan sees potential in the program for the Biswas Fellows to make revolutionary progress in health research.
“I’m incredibly grateful to the Biswas Family Foundation for their generous support in enabling transformative research at MIT,” Chandrakasan says.
© Photo courtesy of the Biswas family.
Exploring data and its influence on political behavior
Data and politics are becoming increasingly intertwined. Political campaigns and voter mobilization efforts are now heavily data-driven, and voters, pollsters, and elected officials rely on data to make choices with local, regional, and national impacts.
A Department of Political Science course offers students tools to help make sense of these choices and their outcomes.
In class 17.831 (Data and Politics), students are introduced to the principles and practices necessary to understand electoral and other types of political behavior. Taught by associate professor of political science Daniel Hidalgo, the course has students use real-world datasets to explore topics like election polling and prediction, voter turnout, voter targeting, and shifts in public opinion over time.
The course aims to equip students to explain why and how the use of data and statistical methods has changed electoral politics, to understand the basic principles of social science statistics, and to analyze data using modern statistical computing tools. The capstone is an original project involving the collection, analysis, and interpretation of the kind of survey data used in modern campaigns.
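As a small, hypothetical example of the kind of analysis such a capstone involves, the sketch below estimates a candidate's support and a 95 percent margin of error from raw poll responses. The responses and sample size are invented for illustration; real course projects would use students' own survey data.

```python
# Minimal sketch of estimating a poll proportion with a margin of error.
# Responses are synthetic; real projects would use collected survey data.
import math

responses = ["A", "B", "A", "A", "B", "A", "B", "A", "A", "B"] * 80  # 800 respondents
n = len(responses)
p_hat = responses.count("A") / n            # sample proportion for candidate A
se = math.sqrt(p_hat * (1 - p_hat) / n)     # standard error of the proportion
margin = 1.96 * se                          # 95% margin of error (normal approx.)

print(f"Support for A: {p_hat:.1%} +/- {margin:.1%}")
```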
“I wanted to create an applied, practice-based course that would appeal to undergraduates and provide a foundation for parsing, understanding, and reporting on large datasets in politics,” says Hidalgo, who redesigned the course for the spring 2025 semester.
Hidalgo, who also works in the Political Methodology Lab at MIT, investigates the political economy of elections, campaigns, and representation in developing democracies, especially in Latin America, as well as quantitative methods in the social sciences.
Politics and modernity
The influence of, and access to, artificial intelligence and large language models makes a course like Data and Politics even more important, Hidalgo says. “You have to understand the people at the other end of the data,” he argues.
The course also centers the human element in politics, exploring conflict and bias, their structures, and their impacts, while working to improve information literacy and coherent storytelling.
“Data analysis and collection will never be perfect,” Hidalgo says. “But analyzing and understanding who holds which ideas, and why, and using the information to tell a coherent story is valuable in politics and elsewhere.”
The “always on” nature of news and related content, coupled with the variety of communications channels available to voters, has increased the complexity of the data collection process in polling and campaigns. “In the past, people would answer the phone when you called their homes,” Hidalgo notes, describing analog methods previously used to collect voter data. Now, political scientists, data analysts, and others must contend with the availability of streaming content, mobile devices, and other channels comprising a vast, fractured media ecosystem.
The course opens a window into what happens behind the scenes of local and national political campaigns, which appealed to second-year political science major Jackson Hamilton. “I took this class hoping to expand my ability to use coding for political science applications, and in order to better understand how political models and predictions work,” he says.
“We tailor-made our own sets of questions and experimental designs that we thought would be interesting,” Hamilton adds. “I found that political issues that get a lot of media coverage are not necessarily the same issues which divide lawmakers, at least locally.”
Transparency and accountability in politics and other areas
Teaching students to use tools like polling and data analysis effectively can improve their ability to identify and combat disinformation and misinformation. “As a political scientist, I’m substantively engaged,” Hidalgo says, “and I’d like to help others be engaged, too.”
“There’s lots of data available, and this course provides a foundation and the resources necessary to understand and visualize it,” Hidalgo continues. “The ability to design, implement, and understand surveys has value inside and outside the classroom.”
In politics, Hidalgo believes, equipping students to navigate these spaces effectively can deepen civic engagement. Data, he says, can help defend ideas. “There’s so much information, it’s important to develop the skills and abilities necessary to understand and visualize it,” he says. “This has value for everyone.”
Second-year physics major Sean Wilson, who also took the class this spring, notes the value of data visualization and analysis both as a potential physicist and a voter. “Data analysis in both politics and in physics is essential work given that voting tendencies, public opinion, and government leadership change so often in the United States,” he says, “and that modeling can be used to support physical hypotheses and improve our understanding of how things work.”
For Wilson, the course can help anyone interested in understanding large groups’ behaviors. “Political scientists are constantly working to better understand how and why certain events occur in U.S. politics, and data analysis is an effective tool for doing so,” he says. “Members of a representative democracy can make better decisions with this kind of information.”
Hamilton, meanwhile, learned more about the behind-the-scenes machinery at work in electoral politics. “I had the opportunity to create a couple of budget trade-off questions, to get a sense of what people actually thought the government should spend money on when they had to make choices,” he says.
“Computer science and data science aren’t just useful for STEM applications; data science approaches can also be extremely useful in many social sciences,” Hamilton argues.
“[Hidalgo helped me realize] that I needed to understand and use data science approaches to gain a deeper understanding of my areas of interest,” Hamilton says. “He focuses on how different approaches in coding can be applied to different types of problems in political science.”
© Photo: Hanley Valentin
New models improve predictions of snow, rock and ice avalanches
S’pore, NUS need to stay open to be a ‘sanctuary for global talent’
- CNA, 3 July 2025
- 8world Online, 3 July 2025
- Channel U News, 3 July 2025
- The Straits Times, 4 July 2025, Singapore, pA12
- Lianhe Zaobao, 4 July 2025, Singapore, p9
- Money 89.3FM, 4 July 2025
- Oli 96.8FM, 4 July 2025
- Tamil Murasu, 5 July 2025, p2
Study shows how a common fertilizer ingredient benefits plants
Lanthanides are a class of rare earth elements that in many countries are added to fertilizer as micronutrients to stimulate plant growth. But little is known about how they are absorbed by plants or influence photosynthesis, potentially leaving their benefits untapped.
Now, researchers from MIT have shed light on how lanthanides move through and operate within plants. These insights could help farmers optimize their use to grow some of the world’s most popular crops.
Published today in the Journal of the American Chemical Society, the study shows that a single nanoscale dose of lanthanides applied to seeds can make some of the world’s most common crops more resilient to UV stress. The researchers also uncovered the chemical processes by which lanthanides interact with the chlorophyll pigments that drive photosynthesis, showing that different lanthanide elements strengthen chlorophyll by replacing the magnesium at its center.
“This is a first step to better understand how these elements work in plants, and to provide an example of how they could be better delivered to plants, compared to simply applying them in the soil,” says Associate Professor Benedetto Marelli, who conducted the research with postdoc Giorgio Rizzo. “This is the first example of a thorough study showing the effects of lanthanides on chlorophyll, and their beneficial effects to protect plants from UV stress.”
Inside plant connections
Certain lanthanides are used as contrast agents in MRI and for applications including light-emitting diodes, solar cells, and lasers. Over the last 50 years, lanthanides have become increasingly used in agriculture to enhance crop yields, with China alone applying lanthanide-based fertilizers to nearly 4 million hectares of land each year.
“Lanthanides have been considered for a long time to be biologically irrelevant, but that’s changed in agriculture, especially in China,” says Rizzo, the paper’s first author. “But we largely don’t know how lanthanides work to benefit plants — nor do we understand their uptake mechanisms from plant tissues.”
Recent studies have shown that low concentrations of lanthanides can promote plant growth, root elongation, hormone synthesis, and stress tolerance, but higher doses can cause harm. Striking the right balance has been hard because of our limited understanding of how lanthanides are absorbed by plants and how they interact with the soil around roots.
For the study, the researchers leveraged seed coating and treatment technologies they previously developed to investigate the way the plant pigment chlorophyll interacts with lanthanides, both inside and outside of plants. Up until now, researchers haven’t been sure whether chlorophyll interacts with lanthanide ions at all.
Chlorophyll drives photosynthesis, but the pigments lose their ability to efficiently absorb light when the magnesium ion at their core is removed. The researchers discovered that lanthanides can fill that void, helping chlorophyll pigments partially recover some of their optical properties in a process known as re-greening.
“We found that lanthanides can boost several parameters of plant health,” Marelli says. “They mostly accumulate in the roots, but a small amount also makes its way to the leaves, and some of the new chlorophyll molecules made in leaves have lanthanides incorporated in their structure.”
This study also offers the first experimental evidence that lanthanides can increase plant resilience to UV stress, something the researchers say was completely unexpected.
“Chlorophylls are very sensitive pigments,” Rizzo says. “They can convert light to energy in plants, but when they are isolated from the cell structure, they rapidly hydrolyze and degrade. However, in the form with lanthanides at their center, they are pretty stable, even after extracting them from plant cells.”
The researchers, using different spectroscopic techniques, found the benefits held across a range of staple crops, including chickpea, barley, corn, and soybeans.
The findings could be used to boost crop yield and increase the resilience of some of the world’s most popular crops to extreme weather.
“As we move into an environment where extreme heat and extreme climate events are more common, and particularly where we can have prolonged periods of sun in the field, we want to provide new ways to protect our plants,” Marelli says. “There are existing agrochemicals that can be applied to leaves for protecting plants from stressors such as UV, but they can be toxic, increase microplastics, and can require multiple applications. This could be a complementary way to protect plants from UV stress.”
Identifying new applications
The researchers also found that larger lanthanide elements like lanthanum were more effective at strengthening chlorophyll pigments than smaller ones. Lanthanum is considered a low-value byproduct of rare earths mining, and can become a burden to the rare earth element (REE) supply chain due to the need to separate it from more desirable rare earths. Increasing the demand for lanthanum could diversify the economics of REEs and improve the stability of their supply chain, the scientists suggest.
“This study shows what we could do with these lower-value metals,” Marelli says. “We know lanthanides are extremely useful in electronics, magnets, and energy. In the U.S., there’s a big push to recycle them. That’s why for the plant studies, we focused on lanthanum, being the most abundant, cheapest lanthanide ion.”
Moving forward, the team plans to explore how lanthanides work with other biological molecules, including proteins in the human body.
In agriculture, the team hopes to scale up its research to include field and greenhouse studies to continue testing the results of UV resilience on different crop types and in experimental farm conditions.
“Lanthanides are already widely used in agriculture,” Rizzo says. “We hope this study provides evidence that allows more conscious use of them and also a new way to apply them through seed treatments.”
The research was supported by the MIT Climate Grand Challenge and the Office for Naval Research.
© Credit: iStock
Robotic probe quickly measures key properties of new materials
Scientists are striving to discover new semiconductor materials that could boost the efficiency of solar cells and other electronics. But the pace of innovation is bottlenecked by the speed at which researchers can manually measure important material properties.
A fully autonomous robotic system developed by MIT researchers could speed things up.
Their system utilizes a robotic probe to measure an important electrical property known as photoconductance, which is how electrically responsive a material is to the presence of light.
The researchers inject materials-science-domain knowledge from human experts into the machine-learning model that guides the robot’s decision making. This enables the robot to identify the best places to contact a material with the probe to gain the most information about its photoconductance, while a specialized planning procedure finds the fastest way to move between contact points.
During a 24-hour test, the fully autonomous robotic probe took more than 125 unique measurements per hour, with more precision and reliability than other artificial intelligence-based methods.
By dramatically increasing the speed at which scientists can characterize important properties of new semiconductor materials, this method could spur the development of solar panels that produce more electricity.
“I find this paper to be incredibly exciting because it provides a pathway for autonomous, contact-based characterization methods. Not every important property of a material can be measured in a contactless way. If you need to make contact with your sample, you want it to be fast and you want to maximize the amount of information that you gain,” says Tonio Buonassisi, professor of mechanical engineering and senior author of a paper on the autonomous system.
His co-authors include lead author Alexander (Aleks) Siemenn, a graduate student; postdocs Basita Das and Kangyu Ji; and graduate student Fang Sheng. The work appears today in Science Advances.
Making contact
Since 2018, researchers in Buonassisi’s laboratory have been working toward a fully autonomous materials discovery laboratory. They’ve recently focused on discovering new perovskites, which are a class of semiconductor materials used in photovoltaics like solar panels.
In prior work, they developed techniques to rapidly synthesize and print unique combinations of perovskite material. They also designed imaging-based methods to determine some important material properties.
But photoconductance is most accurately characterized by placing a probe onto the material, shining a light, and measuring the electrical response.
“To allow our experimental laboratory to operate as quickly and accurately as possible, we had to come up with a solution that would produce the best measurements while minimizing the time it takes to run the whole procedure,” says Siemenn.
Doing so required the integration of machine learning, robotics, and material science into one autonomous system.
To begin, the robotic system uses its onboard camera to take an image of a slide with perovskite material printed on it.
Then it uses computer vision to cut that image into segments, which are fed into a neural network model that has been specially designed to incorporate domain expertise from chemists and materials scientists.
“These robots can improve the repeatability and precision of our operations, but it is important to still have a human in the loop. If we don’t have a good way to implement the rich knowledge from these chemical experts into our robots, we are not going to be able to discover new materials,” Siemenn adds.
The model uses this domain knowledge to determine the optimal points for the probe to contact based on the shape of the sample and its material composition. These contact points are fed into a path planner that finds the most efficient way for the probe to reach all points.
The adaptability of this machine-learning approach is especially important because the printed samples have unique shapes, from circular drops to jellybean-like structures.
“It is almost like measuring snowflakes — it is difficult to get two that are identical,” Buonassisi says.
Once the path planner finds the shortest path, it sends signals to the robot’s motors, which manipulate the probe and take measurements at each contact point in rapid succession.
Key to the speed of this approach is the self-supervised nature of the neural network model. The model determines optimal contact points directly on a sample image — without the need for labeled training data.
The researchers also accelerated the system by enhancing the path planning procedure. They found that adding a small amount of noise, or randomness, to the algorithm helped it find the shortest path.
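The exact planner is described in the paper; as a rough illustration of how injected randomness can help a local search escape a suboptimal route, here is a toy sketch, not the authors' algorithm, that perturbs a 2-opt tour over synthetic contact points and keeps any perturbation that shortens the path.

```python
# Toy sketch of noise-assisted path planning over probe contact points.
# Illustrates the general idea (random perturbation helping a local search
# escape poor tours); this is not the authors' actual algorithm.
import math
import random

random.seed(1)
points = [(random.random(), random.random()) for _ in range(20)]

def tour_length(order):
    return sum(math.dist(points[order[i]], points[order[i + 1]])
               for i in range(len(order) - 1))

def two_opt(order):
    # Repeatedly reverse segments while doing so shortens the tour
    improved = True
    while improved:
        improved = False
        for i in range(1, len(order) - 1):
            for j in range(i + 1, len(order)):
                cand = order[:i] + order[i:j][::-1] + order[j:]
                if tour_length(cand) < tour_length(order):
                    order, improved = cand, True
    return order

best = two_opt(list(range(len(points))))
for _ in range(30):                        # noisy restarts: perturb, re-optimize
    noisy = best[:]
    i, j = sorted(random.sample(range(len(noisy)), 2))
    noisy[i:j] = reversed(noisy[i:j])      # random segment reversal = "noise"
    cand = two_opt(noisy)
    if tour_length(cand) < tour_length(best):
        best = cand
print(f"best path length: {tour_length(best):.3f}")
```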
“As we progress in this age of autonomous labs, you really do need all three of these expertise — hardware building, software, and an understanding of materials science — coming together into the same team to be able to innovate quickly. And that is part of the secret sauce here,” Buonassisi says.
Rich data, rapid results
Once they had built the system from the ground up, the researchers tested each component. Their results showed that the neural network model found better contact points with less computation time than seven other AI-based methods. In addition, the path planning algorithm consistently found shorter path plans than other methods.
When they put all the pieces together to conduct a 24-hour fully autonomous experiment, the robotic system conducted more than 3,000 unique photoconductance measurements at a rate exceeding 125 per hour.
In addition, the level of detail provided by this precise measurement approach enabled the researchers to identify hotspots with higher photoconductance as well as areas of material degradation.
“Being able to gather such rich data that can be captured at such fast rates, without the need for human guidance, starts to open up doors to be able to discover and develop new high-performance semiconductors, especially for sustainability applications like solar panels,” Siemenn says.
The researchers want to continue building on this robotic system as they strive to create a fully autonomous lab for materials discovery.
This work is supported, in part, by First Solar, Eni through the MIT Energy Initiative, MathWorks, the University of Toronto’s Acceleration Consortium, the U.S. Department of Energy, and the U.S. National Science Foundation.
© Credit: iStock
NUS champions youth-led Regen Asia Summit to revitalise ecosystems and communities
The two-day Regen Asia Summit (RAS) 2025, part of the National University of Singapore’s (NUS) 120th anniversary celebrations, was launched on 4 July 2025 as a dynamic platform for addressing environmental and social concerns through regeneration: healing existing environmental degradation and strengthening social resilience by nurturing healthy ecosystems. A total of 600 bright young minds, including 31 student leaders from the ASEAN region, gathered at the Summit to discuss strategies for restoring, renewing and revitalising ecosystems.
The participants, together with around 100 impact leaders and innovators from around the world – including thought leaders from indigenous communities – discussed ways to use interdisciplinary frameworks to address socio-environmental problems and foster intergenerational solutions. By promoting meaningful dialogue and idea-sharing, RAS aims to empower youths to sharpen skills and build lasting intergenerational networks for regeneration.
Held in connection with the NUS College (NUSC) course, Regeneration: Paradigm & Practice for Planetary Health, which equips students with the basic principles and concepts of regeneration, RAS embodies a bottom-up approach, with students leading the charge as conveners and creators of regional conversations. The Summit is fully organised by a team of 31 students from the ASEAN region, who make up the International Student Executive Committee. Led by NUS final-year students Misha Rajendran and Brendan Toh from the Faculty of Arts and Social Sciences, the Committee conceptualised and brought the Summit’s vision to life, overseeing all aspects from programming and speaker outreach to logistics, sponsorship procurement, design, media, and engagement.
Ms Rajendran shared, “This Summit brought together young people from across Asia, united by a shared commitment to both people and the planet. Bringing this vision to life was only possible through the passion and creativity of the student leaders, supported every step of the way by dedicated staff. Leading this diverse team, spanning different geographies, disciplines, and lived experiences, was an incredible privilege. Together, we created a space where intergenerational voices and innovative ideas could flourish, inspiring holistic solutions to restore the health of our ecosystems and societies. This experience has shown me that when young people unite around a shared vision, there is truly no limit to what we can achieve.”
Centred on the theme “Intergenerational Collaboration for Regeneration”, RAS featured fireside chats and close to 50 panel discussions, showcases and workshops exploring regeneration across six core domains – Culture & Society, Ecology, Economy, Governance & Civil Society, Built Environment, and Life & Wellbeing. RAS’ innovative approach aims to move the conversation beyond sustaining the present to actively restoring, healing, and revitalising ecosystems and communities.
A highlight of RAS was a fireside chat featuring President of the Republic of Singapore Mr Tharman Shanmugaratnam, who inspired young leaders with valuable insights on the evening of July 4. President Tharman later joined participants at the Summit’s Impact Leaders Dinner for further interaction.
President Tharman, who is also NUS Chancellor, shared, “The Regen Asia Summit is an important new platform for student leaders across the region. It enables them to envision paths to development that preserve societies’ balance with nature, while advancing resilience and the well-being of vulnerable communities.”
The Summit is the latest in a line-up of programmes and events to celebrate NUS’ 120th anniversary this year, commemorating a legacy, forged over generations, of excellence, innovation and service. NUS President Tan Eng Chye said, “NUS is proud to host the inaugural Regen Asia Summit which brings together passionate students and thought leaders from around the world. This is a very impressive ground-up effort by a team of over 30 student leaders from the ASEAN region who collaborated across borders and joined hands to discuss environmental protection and social resilience. Their deep commitment to cross-cultural exchange will inspire more young people towards regional solidarity and youth-driven leadership in tackling pressing global challenges. We are confident the Summit will inspire positive change and grow into a powerful force for good – both in the region and beyond.”
As a starting point to engage diverse stakeholders, encourage collaboration across Asia, advocate for action, and shape policy, RAS sets the stage for the upcoming Asian Undergraduate Symposium (AUS) 2025, where ideas are translated into impact projects for a chance to secure seed grants. Organised as the NUS-AUN (ASEAN University Network) Summer Camp, the annual AUS brings together 300 undergraduates from over 50 ASEAN partner universities each year and will take place between 7 and 19 July 2025 at NUS College.
Global risk vulnerabilities an alarming concern, signalling the imperative need for a more united and multilateral response
Multilateral institutions and society at large are dangerously unprepared for the most critical and interconnected risks threatening humanity. If left unaddressed, these risks have the potential to exacerbate geopolitical tensions, societal discord, crisis response challenges and much more.
This is the key takeaway from the inaugural 2024 United Nations Global Risk Report (UNGRR), which identifies a clear set of “Global Risk Vulnerabilities”: risks that respondents rated as highly important but for which multilateral preparedness is gravely lacking.
The report is based on a 2024 global survey of over 1,100 experts and stakeholders across governments, industry, academia, and civil society, and offers a snapshot of how they perceive global risks and assess the multilateral system’s readiness to address them. The LRF Institute for the Public Understanding of Risk (IPUR) at the National University of Singapore was the survey partner to the UN, leading the design of the survey methodology and conducting the analysis, contributing invaluable insights that helped shape key findings of the report.
Mis- and disinformation was the top global risk vulnerability, followed by three high-priority risk clusters, each belonging to a single risk category and all regarded as crucial but under-addressed by current global systems:
1. Digital and cyber risks such as cybersecurity breakdown and AI
2. Health and social risks such as pandemics and mass migration
3. Resource/environmental risks such as natural hazards and biodiversity loss
When asked to identify which stakeholder group would be best placed to act on each global risk vulnerability, respondents overwhelmingly said joint action by multiple governments was the most effective response. Joint action between governments and civil society, and joint action by governments and the private sector, also consistently ranked as top choices.
“This report makes clear that our world is not just facing isolated risks, but a web of vulnerabilities that are deeply interconnected and under-addressed. It’s also clear that the solutions lie not in silos but collaborative action. It is imperative that we move beyond fragmented responses and invest in more agile, inclusive, and collaborative approaches to safeguard our future,” said Professor Leonard Lee, IPUR Director and contributing author of the UNGRR.
Singapore’s deep interconnections with international trade and financial networks make it particularly important for the country to follow trends in global risks: disruptions or shifts in these networks, such as economic downturns, geopolitical tensions, and technological changes, can have significant and immediate impacts on its economy, stability, and long-term growth prospects.
Dr Olivia Jensen, Deputy Director at IPUR and contributing author of the UNGRR, said, “As an individual, thinking about rising global risks can leave us feeling fearful and helpless but the UNGRR suggests that the critical factor shaping outcomes is how we respond when risks materialise rather than the risks themselves. In times of great uncertainty about global trends, there is even more value in engaging as citizens and employees to signal to our leaders which actions we want them to take.”
The report further outlines how interconnections between risks frequently amplify vulnerabilities. For example, climate change drives migration and political tension, technological advancements can widen inequality gaps, and mis- and disinformation can erode social cohesion and trust in governing institutions. These feedback loops can trigger cascading crises, overwhelming global and regional systems.
Insights from the report were used to develop four foresight scenarios, potentially unfolding between now and 2050. These scenarios demonstrate the potential for “global breakdown” if vulnerabilities remain unaddressed but also highlight the promise of “global breakthrough” through urgent, cooperative, and targeted action on these vulnerabilities.
Other key findings from the report:
- Across all regions, environmental risks emerged as the highest priority, with climate change inaction and large-scale pollution ranking at the top.
- Climate change inaction was seen as a strong driver of biodiversity decline, resource shortages, natural hazards, and mass migration.
- The biggest perceived barriers to action were weak governance/coordination, lack of political consensus, and low trust/accountability.
- Regional vulnerabilities showed that some areas, especially lower-income regions, faced acute threats from economic shocks, while others grappled with the destabilising impact of technology or geopolitics.
NUS researchers honoured as visionaries shaping Asia’s future
Two exceptional NUS researchers have been named in Tatler’s Gen.T Leaders of Tomorrow 2025, a prestigious list celebrating young changemakers across Asia who are shaping the future of their fields.
The NUS honourees, Associate Professor Benjamin Tee and Assistant Professor Jocelyn Chew Han Shi, embody the spirit of innovation and public impact, translating cutting-edge science into real-world solutions that improve lives.
Assoc Prof Benjamin Tee: Blazing trails as an ecosystem builder, entrepreneur and materials scientist
Associate Professor Benjamin Tee from the Department of Materials Science and Engineering at the College of Design and Engineering at NUS is recognised for his entrepreneurial impact and contributions in nurturing startups.
In his role as Vice President (Ecosystem Building) at NUS Enterprise, Assoc Prof Tee drives innovation platforms and programmes such as the National Graduate Research Innovation Programme (GRIP), BLOCK71 Global Network, and the NUS Overseas Colleges programme. These platforms give tech and deeptech founders the boost they need to turn their early ideas into reality.
Balancing his role in shaping vibrant startup ecosystems at NUS, in Singapore and beyond, Assoc Prof Tee has also made his mark as a successful serial entrepreneur. In 2019, he co-founded TwoPlus Fertility with a partner, combining his expertise in medical technology with his passion for entrepreneurship. The company aims to assist over a million couples in their journey towards parenthood through at-home fertility aids, from supplements to test kits.
Assoc Prof Tee has also co-founded three other start-ups: Privi Medical (acquired), Hannah Life Technologies and Tacniq.AI.
In the lab, Assoc Prof Tee is charting new frontiers in electronic skin and intelligent sensor technologies. He leads the Sensors.AI Labs, where his research bridges materials science, electronics, and advanced technologies to build devices inspired by human skin—capable of sensing, healing, and adapting.
His innovations include the world’s fastest-sensing electronic skins, the brightest stretchable and self-healing lighting device, and high-performance sensors for applications in human-robot interaction.
Assoc Prof Tee’s achievements in scientific research have earned him prestigious international recognitions, including the World Economic Forum’s Singapore Young Scientist of the Year in 2019, the National Research Foundation Fellowship in 2017, and the MIT Technology Review’s TR35 Innovator (Global) in 2015. In 2021, his team’s work on healthcare sensors emerged as the International Winner of the James Dyson Foundation Prize — marking Singapore’s first global win in the award’s 17-year history.
“It is an incredible honour to be named among Tatler Gen.T’s Leaders of Tomorrow 2025 list. This recognition reflects my collaborative spirit to drive research, innovation, and enterprise from Singapore to the world. I am excited to continue advancing solutions that push the frontiers of technology and deliver meaningful impact on society,” said Assoc Prof Tee.
Read the citation on Assoc Prof Benjamin Tee here.
Asst Prof Jocelyn Chew Han Shi: Driving digital behavioural health and servant leadership
Assistant Professor Jocelyn Chew from the NUS Alice Lee Centre for Nursing Studies (NUS Nursing) has been recognised for her visionary work at the intersection of digital health, behavioural science, and nursing innovation.
“Being named a Tatler Gen.T Leader of Tomorrow is both an honour and a reminder of the responsibility we carry to shape a healthier, more equitable future,” she said. “At NUS, I’m privileged to work at the intersection of science, innovation, and care—where bold ideas can translate into real-world impact for the communities we serve.”
Asst Prof Chew’s academic journey at NUS is marked by three key themes: translational research, personal growth, and community inspiration.
In the realm of translational research, Asst Prof Chew has led interdisciplinary work in obesity management and cardiometabolic health. Among her flagship innovations are the Modu© app (formerly known as eTRIP) and the LIGHTER programme, which use behavioural science, digital phenotyping and novel counselling techniques to encourage healthier lifestyle habits. In addition, she founded the Singapore Nursing Innovation Group, set up a nurse-led translational service, and has consistently worked to build platforms that empower students and clinicians to address unmet clinical needs.
According to Asst Prof Chew, her personal and professional growth has been deeply shaped by the dynamic academic environment at NUS. Balancing roles as a scientist and mother of two young children, she is grateful for her family’s support and expresses a deeper appreciation for work-life integration and servant leadership in academia.
Asst Prof Chew draws constant inspiration from her students, mentors, and colleagues across disciplines. “The collegiality and diversity of thought at NUS make it a truly energising place to work and grow,” she added.
She expressed her appreciation to colleagues and mentors including Professor Dean Ho, Adjunct Professor Ngiam Kee Yuan, Associate Professor Shefaly Shorey, Professor Wang Wenru, Professor Nick Sevdalis, Professor Roger Foo and Professor Liaw Sok Ying (Head of NUS Nursing), for generously sharing their wisdom and providing unwavering support over the years.
Read the citation of Asst Prof Jocelyn Chew here.
Intelligent wound dressing controls inflammation
NUS researchers develop novel material for water quality monitoring device
Clean, safe water is vital for human health and well-being. It also plays a critical role in our food security, supports high-tech industries, and enables sustainable urbanisation. However, detecting contamination quickly and accurately remains a major challenge in many parts of the world. A groundbreaking new device developed by researchers at the National University of Singapore (NUS) has the potential to significantly advance water quality monitoring and management.
Taking inspiration from the biological function of the oily protective layer found on human skin, a team of researchers led by Associate Professor Benjamin Tee from the Department of Materials Science and Engineering in the College of Design and Engineering at NUS translated this concept into a versatile material, named ReSURF, capable of spontaneously forming a water-repellent interface. This new material, which can be prepared through a rapid micro-phase separation approach, autonomously self-heals and can be recycled. The researchers incorporated the material into a device known as a triboelectric nanogenerator (TENG), which uses the energy from the movement of water droplets to create an electric charge. The resulting device (ReSURF sensor) can be applied as a water quality monitor.
“The ReSURF sensor can detect various pollutants, such as oils and fluorinated compounds, which are challenging for many existing sensors. This capability, together with unique features such as self-powered, self-healing, reusability and recyclability, positions ReSURF as a sustainable solution for real-time, on-site, and sustainable water quality monitoring,” said Assoc Prof Tee.
The team’s design of the ReSURF material and performance of the novel water quality sensor were published in the scientific journal Nature Communications on 1 July 2025.
Rapid and sustainable water quality sensing
Existing water quality monitoring technologies such as electrochemical sensors, optical detection systems, and biosensors are effective in certain specific applications, such as detecting heavy metals, phosphorus, and microbial pollution.
However, these technologies often face limitations including slow response, high costs, reliance on external reagents or power sources, limited reusability, and the need for bulky laboratory equipment or specialised instrumentation.
The ReSURF sensor developed by the NUS team effectively overcomes these challenges, particularly in on-site, real-time water quality sensing. The self-powered device has demonstrated the ability to detect water contaminants in approximately 6 milliseconds (around 40 times faster than the blink of an eye).
Additionally, the ReSURF sensor is designed to be self-healing and recyclable, making it a sustainable and low-maintenance solution. Being stretchable and transparent, the material can be easily integrated into flexible platforms, including soft robotics and wearable electronics, setting it apart from conventional sensing materials.
Furthermore, the ReSURF material applied as a sensor offers an environmentally friendly solution as it can be easily recycled due to its solubility in solvents, enabling it to be reused in new devices without suffering a loss in performance.
ReSURF sensor: How it works
The ReSURF sensor monitors water quality by analysing the electrical signals generated when analytes in water droplets — such as salts, oils, or pollutants — contact its surface. When droplets containing analytes strike the sensor’s water-repellent surface, they spread out and slide off quickly, generating electric charges within milliseconds. The magnitude and characteristics of the signal vary with the composition and concentration of the analytes present. By monitoring these signals in real time, the ReSURF sensor can rapidly and accurately assess water quality without the need for external power sources.
To demonstrate its capabilities, the researchers mounted the ReSURF sensor on a pufferfish-like soft robot and used it to detect oil in water and perfluorooctanoic acid – a common contaminant found in water sources. The test produced promising results, with the two contaminants producing distinct voltage signals, providing proof-of-concept that the ReSURF sensor can be used for early surveillance of possible contamination.
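As a purely hypothetical sketch of how such voltage signals might be screened, the snippet below flags droplets whose peak voltage departs from a clean-water baseline. The trace shapes, sampling rate, and threshold are invented for illustration and are not the team's actual signal-processing pipeline.

```python
# Hypothetical screening of droplet voltage traces against a clean baseline.
# Signal shapes, sampling rate, and threshold are illustrative only.
import numpy as np

rng = np.random.default_rng(2)

def peak_voltage(trace):
    return float(np.max(np.abs(trace)))

# Synthetic 6 ms traces (6000 samples per droplet)
t = np.linspace(0, 6e-3, 6000)
clean = np.exp(-t / 2e-3) * 1.0 + rng.normal(0, 0.02, t.size)
oily  = np.exp(-t / 2e-3) * 0.4 + rng.normal(0, 0.02, t.size)

baseline = peak_voltage(clean)
for name, trace in [("clean", clean), ("oil-contaminated", oily)]:
    flag = "ALERT" if peak_voltage(trace) < 0.7 * baseline else "ok"
    print(f"{name}: peak={peak_voltage(trace):.2f} V -> {flag}")
```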
Safeguarding water quality
The ReSURF sensor offers broad application potential. It can be deployed in rivers, lakes, and reservoirs to enable early surveillance of pollutants, allowing for quick response to water contamination emergencies. In agriculture, it is capable of monitoring water safety in areas like rice fields. In industrial settings and sewage treatment plants, the ReSURF sensor could provide valuable insights for wastewater management.
Next steps
The research team hopes to optimise the ReSURF sensor by enhancing the specificity of pollutant detection, integrating wireless data transmission capabilities, and scaling the system for long-term or large-scale environmental monitoring. Additionally, the researchers plan to explore more eco-friendly material alternatives to enhance sustainability and align with evolving environmental regulations.
“Future iterations could integrate additional sensing modalities or machine learning–based signal analysis to enable more precise identification and classification of pollutants. We envision this platform as a foundation for the development of more intelligent and responsive water quality monitoring systems,” said Assoc Prof Tee.
Optimising green transport systems with smart tools: A mission to power a sustainable future
As countries race towards achieving net-zero emissions through renewable energy adoption, research plays a pivotal role in shaping how we harness greener energy sources to power our cities, move people, and manage resources. Professor Dipti Srinivasan from the Department of Electrical and Computer Engineering at the College of Design and Engineering at NUS is combining her passion for artificial intelligence (AI) with a deep commitment to sustainability, developing smart tools that make clean technologies, like electric buses and renewable energy, not just viable but efficient and scalable.
Her journey began with a simple yet powerful question: How can AI solve real-world energy problems? Over time, this curiosity evolved into a focused mission — to help society reduce its reliance on fossil fuels by making renewable energy sources, such as solar and wind, more reliable and accessible.
“I wanted to find smart, data-driven ways to help integrate renewable energy sources better into our power systems and support a cleaner, more sustainable future,” Prof Srinivasan explained.
A data-driven vision for greener cities
Prof Srinivasan’s current research investigates how computational intelligence — drawing on nature-inspired methods like neural networks and evolutionary algorithms — can optimise renewable energy integration and electrified transport systems.
Computational tools are particularly useful for managing complex systems, such as city-wide electric bus networks or national power grids: they provide insights for planning, help balance supply and demand, and support decision-making under constraints such as battery capacity or power grid limits.
Prof Srinivasan and her team leverage evolutionary computation, which mimics natural selection, to find solutions by keeping the best-performing candidates and improving them over time — just as nature evolves stronger species. The team applies this technique to determine the best locations and sizes for battery storage, so that energy is stored and delivered efficiently across the power grid.
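To make the idea concrete, here is a minimal, hypothetical sketch of an evolutionary search over battery sizes at a handful of grid nodes. The cost model, node count, and all parameters are invented for illustration and are not the team's actual formulation.

```python
# Minimal evolutionary-search sketch: evolve candidate battery sizes,
# keep the fittest, and mutate them. Cost model is hypothetical.
import random

random.seed(3)
NODES, BUDGET_MWH = 5, 10.0
demand_peaks = [3.0, 1.5, 4.0, 0.5, 2.0]  # hypothetical per-node peaks (MWh)

def fitness(sizes):
    # Penalize unmet peak demand and exceeding the total storage budget
    unmet = sum(max(d - s, 0.0) for d, s in zip(demand_peaks, sizes))
    over = max(sum(sizes) - BUDGET_MWH, 0.0)
    return -(unmet + 10.0 * over)

def mutate(sizes):
    i = random.randrange(NODES)
    child = sizes[:]
    child[i] = max(0.0, child[i] + random.gauss(0, 0.5))
    return child

pop = [[random.uniform(0, 3) for _ in range(NODES)] for _ in range(40)]
for _ in range(200):                       # generations
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(30)]

best = max(pop, key=fitness)
print([round(s, 2) for s in best], round(fitness(best), 3))
```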
Smarter charging, smarter fleets
In a study last year, Prof Srinivasan and Dr Can Bark Saner, a research fellow from the Department of Mathematics at the NUS Faculty of Science, introduced a multi-module optimisation framework for the planning and operation of electric bus (e-bus) shuttle fleets to reduce life cycle cost and maximise savings on charger procurement, electricity, and battery degradation.
The framework was published in the journal IEEE Transactions on Intelligent Transportation Systems on 21 August 2024.
As part of the framework, the NUS team proposed a three-module model comprising:
· a vehicle scheduling module to determine e-bus deployment and trip assignments to ensure alignment with energy consumption and mitigate battery degradation;
· a charger deployment and charging planning module that determines the number of chargers to deploy at depots and across e-bus charging schedules to minimise life cycle costs; and
· an online charging scheduling module which updates charging schedules to handle uncertainties in trip energy consumption.
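The sketch below illustrates only the flavour of the third, online module under heavily simplified assumptions; the greedy rule, bus energy needs, charger count, charging rate, and tariff are invented and do not come from the published framework.

```python
# Toy stand-in for the online scheduling module: a depot scheduler that
# re-plans charging as trip energy use deviates from forecast. The buses,
# charger count, charging rate, and tariff are invented for illustration.

def schedule_charging(needs, tariff, chargers=2, rate_kwh=50):
    """needs: {bus_id: kWh still required}; tariff: price per hourly slot."""
    plan = {bus: [] for bus in needs}
    # Visit hours from cheapest to most expensive electricity price.
    for hour in sorted(range(len(tariff)), key=lambda h: tariff[h]):
        # Each hour, assign the available chargers to the buses that still
        # need the most energy (a simple greedy rule).
        needy = sorted((b for b in needs if needs[b] > 0),
                       key=lambda b: -needs[b])[:chargers]
        for bus in needy:
            needs[bus] = max(0, needs[bus] - rate_kwh)
            plan[bus].append(hour)
    return plan

# A bus that returns with higher-than-forecast consumption simply enters
# the next re-plan with a larger remaining energy need.
print(schedule_charging({"bus_A": 120, "bus_B": 60}, tariff=[0.30, 0.12, 0.10, 0.25]))
```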
Her team’s work complements this broader focus on computational intelligence-based decision-making, especially in the context of large-scale electric vehicle (EV) charging and integration with renewable power. With the proposed framework, they demonstrated a life cycle cost reduction of up to 38.2 per cent, including a decrease of up to 90.2 per cent in battery degradation cost.
“We’re working on how to manage EV charging at scale, especially for large fleets in cities, workplaces, or public charging hubs. The goal is to maximise the use of solar and wind power during EV charging, by aligning charging schedules with periods of high renewable energy generation. That way, we make the most of renewable energy and reduce stress on the grid,” said Prof Srinivasan.
The team is also developing algorithms to support Vehicle-to-Grid (V2G) technologies, allowing EVs not just to consume power, but to return it to the electrical grid when needed — turning EVs into mobile storage units that help stabilise the power system.
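As a rough illustration of the scheduling idea, and not of the team's algorithms, the toy loop below charges an aggregate fleet when forecast renewable output is high and discharges it back to the grid (V2G) during shortfalls; the forecasts, thresholds, and fleet figures are all assumptions.

```python
# Toy sketch: charge the fleet in hours of high forecast renewable output
# and discharge back to the grid (V2G) when renewables fall short. The
# forecasts, thresholds, and fleet parameters are all invented.
renewables = [5, 20, 35, 30, 10, 4]  # assumed surplus green energy (MWh) per hour
step = 15.0                          # fleet charge/discharge per hour (MWh)
stored = 0.0                         # aggregate energy held in EV batteries

for hour, green in enumerate(renewables):
    if green >= 25:                       # plenty of sun/wind: charge
        stored += step
        action = "charge"
    elif green <= 8 and stored >= step:   # shortfall: feed power back
        stored -= step
        action = "discharge (V2G)"
    else:
        action = "idle"
    print(f"hour {hour}: renewables={green} MWh -> {action}, stored={stored} MWh")
```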
Beyond technologies -- towards consumer adoption
Integrating EVs and renewable energy into existing infrastructure is not just a technical challenge; it involves stakeholders ranging from industry partners to consumers. Prof Srinivasan stresses the importance of looking beyond infrastructure: for clean technologies to succeed, people need to understand, trust, and feel supported in adopting them.
“We must think about affordability, ease of use, and awareness. People need clear information, strong incentives, and policies that support their choices,” said Prof Srinivasan.
She added, “People need access to clear information, financial incentives, and reliable technology that fits seamlessly into their lives. Supportive policies and a strong focus on consumer behaviour and acceptance also play a key role in driving the transition to clean energy.”
Envisioning Singapore’s renewable energy future
Looking towards a sustainable future, Prof Srinivasan sees enormous potential in Singapore’s approach to energy innovation.
She envisions a future where renewable energy plays a central role in Singapore’s power system — enabled by smart tools, supported by strong policy, and integrated into everyday life. Prof Srinivasan highlighted that with land constraints, breakthroughs are needed in solar deployment, energy storage, and grid management.
At the heart of her work is a belief that technology, when designed thoughtfully and deployed strategically, can drive real change for a greener and more sustainable future with renewable energy.
“This work isn’t just about algorithms or software. It’s about building systems that support a cleaner, more resilient future, and making sure that the shift to renewables and electric mobility is not just possible, but practical,” said Prof Srinivasan.
As cities and countries plan for more e-buses, greener grids, and sustainable transport systems, Prof Srinivasan’s research offers a critical piece of the puzzle — ensuring we don’t just adopt clean technology, but do so intelligently, affordably, and equitably.
Banning #SkinnyTok won’t fix the problem of viral harm
By Dr Chew Han Ei, Senior Research Fellow at the Institute of Policy Studies at NUS
Writer Na Tien Piet, a Peranakan Chinese, skilled in verse
By Dr Azhar Ibrahim Alwee, Senior Lecturer at the Dept of Malay Studies, Faculty of Arts and Social Sciences at NUS
This health insurance clash is a chance to fix issues in Singapore's healthcare system
By Adjunct Assoc Prof Jeremy Lim from the Saw Swee Hock School of Public Health at NUS; Dr Taufeeq Wahab from NUHS; and Ms Sheryl Ha, an incoming student at Duke-NUS Medical School
The problem with social media is bigger than who gets access
Dr Chew Han Ei, Senior Research Fellow from the Institute of Policy Studies, Lee Kuan Yew School of Public Policy at NUS
Boosting solar efficiency: NUS researchers achieve record-setting perovskite tandem solar cell with novel NIR-harvesting molecule
Scientists at the National University of Singapore (NUS) have demonstrated a perovskite–organic tandem solar cell with a certified world-record power conversion efficiency of 26.4 per cent over a 1 cm² active area — making it the highest-performing device of its kind to date. This milestone is driven by a newly designed narrow-bandgap organic absorber that significantly enhances near-infrared (NIR) photon harvesting, a long-standing bottleneck in thin-film tandem solar cells.
This latest research breakthrough was achieved under the leadership of Assistant Professor Hou Yi, who is a Presidential Young Professor in the Department of Chemical and Biomolecular Engineering under the College of Design and Engineering at NUS and leads the Perovskite-based Multijunction Solar Cells Group at the Solar Energy Research Institute of Singapore (SERIS) at NUS.
The NUS research team published their groundbreaking work in the prestigious scientific journal Nature on 25 June 2025.
Unlocking the promise of tandem solar cells
Perovskite and organic semiconductors both offer widely tunable bandgaps, enabling tandem cells to approach very high theoretical efficiencies. “Thanks to their light weight and flexible form factor, perovskite–organic tandem solar cells are ideally suited to power applications that are run directly on devices such as drones, wearable electronics, smart fabrics and other AI-enabled devices,” said Asst Prof Hou.
However, the absence of efficient NIR thin-film absorbers – materials that capture sunlight in the NIR region and thereby raise the overall efficiency of tandem cells – has kept perovskite–organic tandem cells lagging behind alternative designs.
Harnessing the near-infrared
To overcome this challenge, Asst Prof Hou and his team developed an asymmetric organic acceptor with an extended conjugation structure, enabling absorption deep into the NIR region while maintaining a sufficient driving force for efficient charge separation and promoting ordered molecular packing. Ultrafast spectroscopy and device physics analyses confirmed that this design achieves high free charge carrier collection with minimal energy loss.
Building on the organic subcell’s performance, the researchers stacked it beneath a high-efficiency perovskite top cell, interfacing the two layers with a transparent conducting oxide (TCO)-based interconnector.
The newly designed tandem cell achieved a power conversion efficiency of 27.5 per cent on 0.05-cm² samples and 26.7 per cent on 1-cm² devices, with the 26.4 per cent result independently certified. These findings mark the highest certified performance to date among perovskite–organic, perovskite–CIGS, and single-junction perovskite cells of comparable size.
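For context, power conversion efficiency is simply the ratio of a cell's electrical output to the incident solar power, conventionally measured under 100 mW/cm² AM1.5G illumination. The snippet below shows the arithmetic with invented current-voltage parameters, not the certified device's measured values.

```python
# Illustrative arithmetic only: power conversion efficiency (PCE) from the
# standard current-voltage parameters of a solar cell. The values below are
# invented and are NOT the certified device's measured parameters.
def pce(voc_v, jsc_ma_per_cm2, fill_factor, p_in_mw_per_cm2=100.0):
    """PCE (%) = Voc x Jsc x FF / incident power; AM1.5G is 100 mW/cm^2."""
    return voc_v * jsc_ma_per_cm2 * fill_factor / p_in_mw_per_cm2 * 100.0

# A hypothetical tandem with Voc = 2.1 V, Jsc = 16 mA/cm^2, FF = 0.79:
print(f"{pce(2.1, 16.0, 0.79):.1f} per cent")  # -> 26.5 per cent
```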
“With efficiencies poised to exceed 30 per cent, these flexible films are ideal for roll-to-roll production and seamless integration onto curved or fabric substrates — think self-powered health patches that harvest sunlight to run onboard sensors, or smart textiles that monitor biometrics without the need for bulky batteries,” noted Asst Prof Hou.
Next step
In the next phase of their research, the NUS team will focus on enhancing real-world operational stability and advancing towards pilot-line manufacturing – crucial steps in bringing flexible, high-performance solar technology to market.
New leadership appointments
The National University of Singapore (NUS) has appointed Professor Tulika Mitra as the new Dean for the NUS School of Computing (NUS Computing), and Associate Professor Leong Ching as the Acting Dean for the Lee Kuan Yew School of Public Policy (LKYSPP). Both appointments will take effect from 1 July 2025. Assoc Prof Leong will hold the Acting Dean appointment concurrently with her role as Vice Provost (Student Life), while a Dean search is underway.
They succeed current Deans Professor Tan Kian Lee and Professor Danny Quah respectively, who are returning to their academic pursuits at NUS in research and teaching.
NUS President Professor Tan Eng Chye said, “We are pleased to appoint Prof Tulika Mitra as Dean of the School of Computing and Assoc Prof Leong Ching as Acting Dean of the Lee Kuan Yew School of Public Policy while a search for the Dean is being carried out. With their extensive academic and industry experience, I am confident that their leadership will propel NUS Computing and LKYSPP towards higher levels of excellence, enabling us to be at the forefront of teaching, research and innovation while nurturing future-ready students with deep intellectual rigour and the resilience and adaptability to thrive in today’s digital economy.”
School of Computing
Prof Mitra has served as Vice Provost (Academic Affairs) since January 2021 and as Chair of the University Promotion and Tenure Committee (UPTC) since May 2020. From fostering a more supportive and robust promotion and tenure culture to introducing induction programmes for new faculty, she has been instrumental in identifying and recruiting top academic talent, upholding academic excellence, and strengthening the Singaporean academic pipeline. She has created a direct pathway to Full Professorship through impactful educational leadership and introduced clear guidelines for practice-track promotions. Her contributions were recognised with the Singapore Public Administration Medal (Silver) in 2024.
Prof Mitra is a leading expert in hardware-software codesign of computing systems, specialising in real-time embedded systems and energy-efficient AI accelerators. Currently Provost’s Chair Professor in the Department of Computer Science, she has been with NUS Computing since 2001. She has served as the Editor-in-Chief of ACM Transactions on Embedded Computing Systems, Member of the ACM Publications Board, and General/Program Chair of many conferences. Additionally, she serves on the DSTA Board of Directors, Scientific Advisory Board of MPI-SWS, Barkhausen Institute, and international expert panels of the Chinese University of Hong Kong, INRIA France, and KTH Sweden.
On taking up her new appointment, Prof Mitra said, “I am honoured to lead the School at a time when our discipline is central to interdisciplinary innovations reshaping the modern world. I look forward to returning to my roots and working closely with the NUS Computing family of exceptional colleagues and students.”
Prof Mitra will concurrently take on the role of Vice Provost (Special Projects) in addition to helming the School of Computing. She will support the Deputy President (Academic Affairs) and Provost in leading high-impact strategic initiatives for the University.
Lee Kuan Yew School of Public Policy
Associate Professor Leong Ching joined LKYSPP in 2014 and was appointed to her current position in 2019.
An economist with a focus on applying institutional theory to the policy sciences, Assoc Prof Leong is today among the leading scholars in the field of behavioural public policy (BPP), with visiting appointments at the London School of Economics and Cambridge University.
She uses large field experiments to understand the motivating forces behind government and public behaviour. Her work has had significant impact on water policy and sustainability issues, as well as on public willingness to accept novelty, such as recycled drinking water as a solution to global water scarcity. During the COVID-19 pandemic, her work on the willingness to accept new science in the form of mRNA vaccines was cited by the World Health Organization (WHO) as an important behavioural intervention to reduce hesitancy.
Beyond her contributions to public policy research and academic excellence, Assoc Prof Leong is committed to student growth and development. In her current role as Vice Provost (Student Life), she has overseen the integration of student life into the NUS curriculum.
In her new role as Acting Dean at LKYSPP, Assoc Prof Leong will build on LKYSPP’s strengths and track record in research, education and engagement, while fostering partnerships with government, industry and society. She will be supported by the current members of the LKYSPP Deanery who will continue to contribute towards leadership for the School.
Assoc Prof Leong said, “The Lee Kuan Yew School of Public Policy is built on a belief in the transformational power of policy ideas. I look forward to working with my colleagues to continue producing these global public goods so as to inform government and public decisions on the most pressing problems of our time.”
New Vice Provost (Academic Affairs)
Professor Ho Ghim Wei, who has been serving as Associate Provost (Academic Affairs), will succeed Prof Mitra as Vice Provost (Academic Affairs) from 1 July 2025. In her new role, she will lead the University’s efforts to nurture, develop and empower faculty members towards excellence in education, research and innovation.
At the Office of the Provost, Prof Ho has been supporting the Vice Provost in overseeing the academic review process and upholding standards of excellence in faculty career progression. She has been playing key roles in organising faculty development workshops and events, as well as in outreach and recruitment efforts.
Prof Ho is an outstanding scholar from the College of Design and Engineering (CDE), where she leads the Sustainable Smart Solar Systems research group. Her team conducts fundamental and applied research on nanosystems based on emerging low-dimensional nanomaterials, interfacial interactions and hybridised functionalities for applications in energy, the environment, electronics and healthcare. She also served as Vice Dean for Student Life at CDE for over four years.
In appreciation
Expressing his gratitude to the two outgoing deans for their service, President Tan said, “I am deeply grateful to Kian Lee and Danny for their leadership and service. A homegrown talent, Kian Lee is an outstanding researcher, data scientist and educator who has steered the faculty from a small department in its early years into one of the world’s top and highly competitive computing schools. Danny is an eminent economist who has enabled LKYSPP to strengthen and anchor its position as a global thought leader, advancing impactful policy solutions and training policy makers for Singapore, the region, and beyond.”
“As they return to academia, I look forward to their continued contributions to NUS in inspiring future generations of researchers and students to shape the future of technology and serve for the greater good of society,” President Tan added.
Under Prof Tan’s leadership, NUS Computing has flourished, consistently drawing in and nurturing the best and brightest talents. In tandem with the growing prominence and impact of Artificial Intelligence (AI), the School has expanded its curriculum offerings with the launch of three new AI-centric degree programmes, and the opening of Sea Building and Sea Connect – featuring new collaborative spaces for teaching and innovation, and home to 12 research labs which will catalyse long-term, fruitful collaborations between academia and industry.
These efforts are vital in driving NUS’ bold ambitions in cutting-edge research, education, and collaboration, advancing the rapidly evolving fields in computing such as AI and data science.
A distinguished global scholar, Prof Quah returned from London in 2016 and joined LKYSPP to pursue research in and contribute to policy development on international economic relations, income inequality, and economic growth. Over the past seven years as Dean of LKYSPP, he has been a visionary and transformative leader.
Under his stewardship, the School has strengthened its position as a leader and authority in public policy research and education, with distinct focus on Asia’s unique challenges and opportunities. Among the significant initiatives that expanded the School’s reach and impact are enhanced leadership training programmes; the Global-is-Asian platform, advancing research and collaboration across the Asia-Pacific and beyond; and the biennial Festival of Ideas, a flagship forum for policy dialogue bringing together experts and opinion leaders to address the most pressing issues of our time.
A prolific writer and sought-after speaker, Prof Quah has worked to push the frontiers of research in his field and, at leading international forums, drawn on that academic research to provide thought leadership in economic policy, global governance, and Asia’s role in international affairs.
Ushering in the Hijrah Year, Advancing Civilisation
By Dr Azhar Ibrahim Alwee, Senior Lecturer from the Dept of Malay Studies, Faculty of Arts and Social Sciences at NUS
How robust infrastructure will anchor JS-SEZ’s success
By Mr Tan Kway Guan, Research Associate and Principal Project Manager and Dr Yi Xin, Research Fellow, both from the Asia Competitiveness Institute, Lee Kuan Yew School of Public Policy at NUS
NUS retains 8th spot, NTU climbs to 12th in latest global university rankings
- The Straits Times, 19 June 2025, The Big Story, pA6
- The New Paper, 19 June 2025
- Tamil Murasu, 19 June 2025, p2
- CNA (TV News), 19 June 2025
- Channel 8, 19 June 2025
- CNA938, 19 June 2025
- Money 89.3FM, 19 June 2025
- Hao 96.3FM, 19 June 2025
- Warna 94.2FM, 19 June 2025
- Suria News Online, 19 June 2025
- Vasantham News Online, 19 June 2025
- Lianhe Zaobao, 20 June 2025, Opinion, p3
Mounting case against notion that boys are born better at math
Elizabeth Spelke studies French testing data, finds no gender gap until instruction begins
Christy DeSmith
Harvard Staff Writer

Elizabeth Spelke.
Stephanie Mitchell/Harvard Staff Photographer
Twenty years ago, cognitive psychologist Elizabeth Spelke took a strong position in an ongoing public debate.
“There are no differences in overall intrinsic aptitude for science and mathematics among women and men,” the researcher declared.
A new paper in the journal Nature, written by Spelke and a team of European researchers, provides what she called “an even stronger basis for that argument.”
A French government testing initiative launched in 2018 provided data on the math skills of more than 2.5 million schoolchildren over five years. Analyses showed virtually no gender differences at the start of first grade, when students begin formal math education. However, a gap favoring boys opened after just four months — and kept growing through higher grades.
The results support previous research findings based on far smaller sample sizes in the U.S. “The headline conclusion is that the gender gap emerges when systematic instruction in mathematics begins,” summarized Spelke, the Marshall L. Berkman Professor of Psychology.
Back in 2005, her position was informed by decades of work studying sensitivity to numbers and geometry in the youngest members of human society.
“My argument was, ‘OK, if there really were biological differences, maybe we would see them in the infancy period,’” recalled Spelke, who laid out her evidence in a critical review for the journal American Psychologist that year.
“We were always reporting on the gender composition of our studies, as well as the relative performance of boys and girls,” Spelke continued. “But we were never finding any differences favoring either gender over the other.”
The possibility remained that differences in skill or even motivation surface later in the lifecycle.
“The fact that there are no differences in infants could be because the abilities that show gender effects actually emerge during preschool,” Spelke said.
Recent years have found the psychologist applying her research on early counting and numeral-recognition skills via educational interventions, all analyzed and refined through randomized controlled experiments.
One of the world’s most influential researchers on early learning, Spelke recently partnered with Esther Duflo, an MIT economics professor and Nobel laureate, to advise the Delhi office of the nonprofit Abdul Latif Jameel Poverty Action Lab (J-PAL). The group is working with the governments of four separate Indian states to develop and test math curricula for preschoolers, kindergartners, and first-graders.
Alongside her longtime collaborator, the cognitive neuroscientist Stanislas Dehaene, Spelke also serves as an adviser on the French Ministry of Education’s Scientific Council. The nationwide EvalAide language and math assessment was introduced with the council’s help in 2018. The project’s goal, Spelke explained, is establishing a baseline measure of every French child’s grasp of basic numeracy and literacy skills, while supporting the ministry in its commitment to implementing an evidence-based education for all French schoolchildren.
Spelke co-authored the Nature paper with Dehaene and eight other researchers, all based in France. Specifically analyzed were four consecutive cohorts of mostly 5- and 6-year-olds entering school between 2018 and 2021.
As in many countries, French girls tested slightly ahead of French boys on language as they started first grade in the fall. But the gender gap was close to null when it came to math.
“That definitely connects to the earlier issue of whether there’s a biological basis for these differences,” Spelke argued.
French first-graders were then reassessed after four months of school, when a small but significant math gap had emerged favoring boys. The effect quadrupled by the beginning of second grade, when schoolchildren were tested yet again.
“It was even bigger in fourth grade,” said Spelke, noting that French children are now assessed at the start of even-number grades. “And in sixth grade it was bigger still.”
For comparison, EvalAide results show the literacy gender gap was reduced by the first year’s four-month mark and changed far less as students progressed to higher grade levels.
Why would a gender gap widen on math specifically as students accumulated more time in school? According to Spelke, the paper provides “only negative answers” concerning ideas about innate sex differences and social bias.
“If there was really a pervasive social bias, and the parents were susceptible to it,” she said, “we would expect boys to be more oriented toward spatial and numerical tasks when they first got to school.”
Delving further into the data yielded more results that caught the researchers’ interest. For starters, Spelke’s co-authors could disaggregate the findings by month of birth, with the oldest French first-graders turning 7 in January — nearly a year before their youngest classmates. The math gap was found to correlate not with age, but with the number of months spent in school.
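A toy simulation, not the paper's actual analysis, shows why the month-of-birth variation is so useful: it lets a regression attribute the gap separately to age and to months of schooling. The simulated data below build in a schooling effect and no age effect, which the fit then recovers.

```python
# Toy illustration, not the paper's analysis: classmates differ in age by
# almost a year (month of birth) while sharing identical months of school,
# so regressing the boy-girl gap on both variables can separate the two
# effects. The data are simulated with a schooling effect and NO age effect.
import numpy as np

rng = np.random.default_rng(0)
school = np.repeat([0.0, 4.0, 12.0, 24.0], 12)   # months of school at test
birth = np.tile(np.arange(12.0), 4)              # January..December births
age = 72.0 + school + (11.0 - birth)             # age in months at test
gap = 0.005 * school + rng.normal(0, 0.002, school.size)  # simulated gap

X = np.column_stack([school, age, np.ones_like(age)])
coef, *_ = np.linalg.lstsq(X, gap, rcond=None)
print(f"per month of school: {coef[0]:+.4f}, per month of age: {coef[1]:+.4f}")
# The fit recovers roughly +0.005 per month of school and ~0 per month of age.
```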
Another noteworthy result concerned the COVID-19 pandemic, which wiped out the last 2.5 months of first grade for children who enrolled in fall 2019. “With less time in school, the amount of the gender gap grew by less than it did in the other years where there wasn’t a long school closure,” Spelke said.
The 2019 cohort yielded one more striking result. Earlier that year, French schoolkids had placed at the very bottom of 23 European countries on the quadrennial Trends in International Mathematics and Science Study. That sparked a national conversation: How could France, birthplace of the great René Descartes, be trailing its peers in mathematics?
In May 2019, the French Education Ministry, with the support of its Scientific Council, called for the introduction of more math curriculum during kindergarten. For the first time, an ever-so-slight gender math gap appeared that fall for those entering first grade. It hadn’t been there in 2018 but remained detectable in results from the 2020 and 2021 cohorts.
The overall results, the most conclusive to date, suggest it’s time to shelve explanations based on biology or bias. Instead, it appears there’s something about early math instruction that produces gender disparities.
“We still don’t know what that is exactly,” said Spelke, who plans to spend much of her 2025-26 sabbatical year in France. “But now we have a chance to find out by randomized evaluations of changes to the curriculum.”
Forecasting the next variant

Professor Eugene Shakhnovich (from left), Dianzhuo (John) Wang, and Vaibhav Mohanty worked together on the studies.
Veasey Conway/Harvard Staff Photographer
Yahya Chaudhry
Harvard Correspondent
Harvard team fuses biophysics and AI to predict viral threats
When the first reports of a new COVID-19 variant emerge, scientists worldwide scramble to answer a critical question: Will this new strain be more contagious or more severe than its predecessors? By the time answers arrive, it’s frequently too late to inform immediate public policy decisions or adjust vaccine strategies, costing public health officials valuable time, effort, and resources.
In a pair of recent publications in Proceedings of the National Academy of Sciences (PNAS), a research team in the Department of Chemistry and Chemical Biology combined biophysics with artificial intelligence to identify high-risk viral variants in record time — offering a transformative approach for handling pandemics. Their goal: to get ahead of a virus by forecasting its evolutionary leaps before it threatens public health.
“As a society, we are often very unprepared for the emergence of new viruses and pandemics, so our lab has been working on ways to be more proactive,” said senior author Eugene Shakhnovich, Roy G. Gordon Professor of Chemistry. “We used fundamental principles of physics and chemistry to develop a multiscale model to predict the course of evolution of a particular variant and to predict which variants will become dominant in populations.”
The studies detail approaches for forecasting the viral variants most likely to become public health risks and for accelerating experimental validation. Together, these advances reshape both the prediction and detection of dangerous viral variants, setting a template for broader applications.
These studies were led by members of Shakhnovich’s lab, including co-authors Dianzhuo (John) Wang and Vaibhav Mohanty, both Ph.D. students in the Harvard Kenneth C. Griffin Graduate School of Arts and Sciences, and Marian Huot, a visiting student from École Normale Supérieure.
“Our work has focused on the spike protein of COVID-19, analyzing how its mutations change viral fitness and immune evasion,” said Wang. “Given that COVID-19 is the most extensively documented pandemic to date, we saw an opportunity to develop models that not only understand viral evolution, but also anticipate which mutations are likely to pose the greatest threat.”
The first study introduced a model that quantitatively linked biophysical features — such as the spike protein’s binding affinity to human receptors and its ability to evade antibodies — to a variant’s likelihood of surging in global populations. By incorporating a complex, yet essential factor called epistasis (where the effect of one mutation hinges on another), the model overcame a key limitation of previous approaches that struggle to make accurate predictions.
“Evolution isn’t linear — mutations interact, sometimes unlocking new pathways for adaptation,” Shakhnovich said. “Factoring these relationships allowed us to forecast the emergence of dominant variants ahead of epidemiological signals.”
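As a minimal sketch of what epistasis means for such a model, and not the published model itself, the snippet below scores a variant by summing invented per-mutation contributions plus pairwise interaction terms, so two mutations together can be worth more (or less) than their individual effects suggest.

```python
# Minimal sketch of the modelling idea, not the published model: a variant's
# fitness sums additive trait contributions of its mutations plus pairwise
# epistatic terms, where one mutation's effect hinges on another's presence.
# The mutation names and weights are invented for illustration.
additive = {"mutA": 0.30, "mutB": 0.20, "mutC": -0.10}
epistasis = {("mutA", "mutB"): 0.25,   # synergy: together they add extra fitness
             ("mutB", "mutC"): -0.15}  # antagonism: together they cost fitness

def fitness(mutations):
    f = sum(additive[m] for m in mutations)
    for (m1, m2), w in epistasis.items():
        if m1 in mutations and m2 in mutations:
            f += w
    return f

print(fitness({"mutA"}))          # 0.3
print(fitness({"mutA", "mutB"}))  # 0.75, not the additive 0.5 -- epistasis
```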
Building on these insights, the companion study introduces VIRAL (Viral Identification via Rapid Active Learning), a computational framework that combines the biophysical model with artificial intelligence to accelerate the detection of high-risk SARS-CoV-2 variants. By analyzing potential spike protein mutations, it identified those likeliest to enhance transmissibility and immune escape.
“At the start of a pandemic, when experimental resources are scarce, we can’t afford to test every possible mutation,” Wang said. “VIRAL uses artificial intelligence to focus lab efforts on the most concerning candidates — dramatically accelerating our ability to identify the variants that could drive the next wave.”
The implications of this research are far-reaching. Simulations show that the VIRAL framework can identify high-risk SARS-CoV-2 variants up to five times faster than conventional approaches, while requiring less than 1 percent of experimental screening effort. This dramatic gain in efficiency could significantly accelerate early outbreak response.
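The loop below is a schematic of this active-learning idea under invented assumptions (a linear surrogate model, made-up biophysical features, and a five-assay-per-round budget), not the published VIRAL algorithm: the model proposes the riskiest-looking candidates, "lab" results come back, and the model is retrained before the next round.

```python
# Schematic in the spirit of VIRAL; details are illustrative guesses, not
# the published algorithm. A surrogate model scores candidate mutations,
# the top few are "sent to the lab" each round, and the measured results
# retrain the model before the next round.
import numpy as np

rng = np.random.default_rng(1)
# Invented biophysical features per candidate mutation, e.g. (change in
# receptor binding, change in antibody escape); the true fitness is an
# unknown linear function of them plus experimental noise.
features = rng.normal(size=(200, 2))
true_fitness = features @ np.array([1.0, 0.6]) + rng.normal(0, 0.1, 200)

tested = []                   # indices already assayed in the "lab"
w_hat = np.zeros(2)           # surrogate model weights, initially ignorant

for _ in range(5):                              # five rounds of lab work
    scores = features @ w_hat                   # predicted risk
    scores[tested] = -np.inf                    # never retest a candidate
    picks = np.argsort(scores)[-5:]             # budget: 5 assays per round
    tested.extend(int(i) for i in picks)
    X, y = features[tested], true_fitness[tested]
    w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)   # retrain on lab data

top = set(int(i) for i in np.argsort(true_fitness)[-10:])  # true top-5% variants
print(f"assayed {len(tested)}/200; hit {len(top & set(tested))}/10 top variants")
```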
“This framework doesn’t just help us track variants — it helps us get ahead of them,” said Huot. “By identifying high-fitness variants before they appear in the population, we can inform vaccine design strategies that anticipate, not just react to, emerging threats.”
A defining feature of this work is its interdisciplinary scope, with the international Harvard team bringing together fields of molecular biophysics, artificial intelligence, and virology to deepen our understanding of rapidly evolving viral threats.
“By uniting physics-driven modeling and machine learning, we’re introducing a predictive framework for viral evolution with broad potential,” Shakhnovich said. “We’re eager to see how this strategy might extend beyond infectious diseases into areas like cancer biology.”
Looking ahead, the team aims to adapt and scale the framework for broader use, targeting challenges such as other emerging viruses and rapidly evolving tumor cells. They emphasize that combining physical modeling with AI could shift the paradigm from reactive tracking to proactive biological forecasting.
“In a world where biological threats are constantly evolving, earlier warning and smarter tools are essential,” Wang said. “Our ultimate goal is to create a platform — one that gives scientists and policymakers a head start not just in future pandemics, but in tackling fast-evolving challenges across biology,” added Huot.
Shakhnovich credited grants from the National Institutes of Health for enabling exploratory research to benefit public health. Basic science and future breakthroughs are in grave danger due to Washington’s cuts to scientific research, Shakhnovich warned.
“Our research has the potential to help all of humankind to solve some serious health problems,” Shakhnovich said. “It would not have been possible without federal funding that looks for long-term benefits.”
MIT and Mass General Hospital researchers find disparities in organ allocation
In 1954, the world’s first successful organ transplant took place at Brigham and Women’s Hospital, in the form of a kidney donated from one twin to the other. At the time, a group of doctors and scientists had correctly theorized that the recipient’s antibodies were unlikely to reject an organ from an identical twin. One Nobel Prize and a few decades later, advancements in immune-suppressing drugs increased the viability of and demand for organ transplants. Today, over 1 million organ transplants have been performed in the United States, more than any other country in the world.
The impressive scale of this achievement was made possible due to advances in organ matching systems: The first computer-based organ matching system was released in 1977. Despite continued innovation in computing, medicine, and matching technology over the years, over 100,000 people in the U.S. are currently on the national transplant waiting list and 13 people die each day waiting for an organ transplant.
Most computational research in organ allocation is focused on the initial stages, when waitlisted patients are being prioritized for organ transplants. In a new paper presented at ACM Conference on Fairness, Accountability, and Transparency (FAccT) in Athens, Greece, researchers from MIT and Massachusetts General Hospital focused on the final, less-studied stage: organ offer acceptance, when an offer is made and the physician at the transplant center decides on behalf of the patient whether to accept or reject the offered organ.
“I don’t think we were terribly surprised, but we were obviously disappointed,” co-first author and MIT PhD student Hammaad Adam says. Using computational models to analyze transplantation data from over 160,000 transplant candidates in the Scientific Registry of Transplant Recipients (SRTR) between 2010 and 2020, the researchers found that physicians were overall less likely to accept liver and lung offers on behalf of Black candidates, resulting in additional barriers for Black patients in the organ offer acceptance process.
For livers, Black patients had 7 percent lower odds of offer acceptance than white patients. When it came to lungs, the disparity became even larger, with 20 percent lower odds of having an offer acceptance than white patients with similar characteristics.
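To make the statistic concrete: an odds ratio of about 0.93 corresponds to 7 percent lower odds. The snippet below computes one from an invented 2×2 table; the study's actual models were fit to SRTR data with clinical covariates, not raw counts like these.

```python
# Minimal sketch of what "7 percent lower odds" means, using an invented
# 2x2 table (these counts are NOT the study's data). Real analyses would
# fit a logistic regression adjusting for clinical covariates.
accepted = {"white": 4200, "black": 930}
rejected = {"white": 55800, "black": 13270}

def odds(group):
    # Odds = acceptances divided by rejections for the group.
    return accepted[group] / rejected[group]

odds_ratio = odds("black") / odds("white")
print(f"odds ratio = {odds_ratio:.2f}")  # ~0.93, i.e. about 7% lower odds
```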
The data don’t necessarily point to clinician bias as the main influence. “The bigger takeaway is that even if there are factors that justify clinical decision-making, there could be clinical conditions that we didn’t control for, that are more common for Black patients,” Adam explains. If the wait-list system fails to account for certain patterns in decision-making, those patterns could create obstacles in the process even if the process itself is “unbiased.”
The researchers also point out that high variability in offer acceptance and risk tolerances among transplant centers is a potential factor complicating the decision-making process. Their FAccT paper references a 2020 paper published in JAMA Cardiology, which concluded that wait-list candidates listed at transplant centers with lower offer acceptance rates have a higher likelihood of mortality.
Another key finding was that an offer was more likely to be accepted if the donor and candidate were of the same race. The paper describes this trend as “concerning,” given the historical inequities in organ procurement that have limited donation from racial and ethnic minority groups.
Previous work from Adam and his collaborators has aimed to address this gap. Last year, they compiled and released Organ Retrieval and Collection of Health Information for Donation (ORCHID), the first multi-center dataset describing the performance of organ procurement organizations (OPOs). ORCHID contains 10 years’ worth of OPO data, and is intended to facilitate research that addresses bias in organ procurement.
“Being able to do good work in this field takes time,” says Adam, who notes that the entirety of the organ offer acceptance project took years to complete. To his knowledge, only one paper to date studies the association between offer acceptance and race.
While the bureaucratic and highly interdisciplinary nature of clinical AI projects can dissuade computer science graduate students from pursuing them, Adam committed to the project for the duration of his PhD in the lab of associate professor of electrical engineering Marzyeh Ghassemi, an affiliate of the MIT Jameel Clinic and the Institute of Medical Engineering and Sciences.
To graduate students interested in pursuing clinical AI research projects, Adam recommends that they “free [themselves] from the cycle of publishing every four months.”
“I found it freeing, to be honest — it’s OK if these collaborations take a while,” he says. “It’s hard to avoid that. I made the conscious choice a few years ago and I was happy doing that work.”
This work was supported with funding from the MIT Jameel Clinic. It was also supported, in part, by Takeda Development Center Americas Inc. (successor in interest to Millennium Pharmaceuticals Inc.), an NIH Ruth L. Kirschstein National Research Service Award, a CIFAR AI Chair at the Vector Institute, and by the National Institutes of Health.
© Image: Alex Ouyang/MIT Jameel Clinic
Research at risk: after-school nutrition and career readiness for NYC middle-schoolers
Does densification lead to more heat stress in cities?
Highly sensitive science

Veasey Conway/Harvard Staff Photographer
Sy Boles
Harvard Staff Writer
David Ginty probes pleasure and pain to shed light on autism, other conditions
The itch of a clothing tag. The seam on the inside of a sock. The tickling of hairs on the back of your neck. For many of us, it’s easy to tune out these sensations as we move through the day. But for some autistic people, everyday sensations can be intolerable.
David Ginty knows why, and it’s not, as many autism researchers once believed, a dysfunction of the brain.
Ginty, the Edward R. and Anne G. Lefler Professor of Neurobiology and chair of the Department of Neurobiology at Harvard Medical School, studies touch and pain. Scientists have known for some time, he said, that our experience of physical sensation is a collaboration between our brain, our central nervous system, and sensory neurons. But the mechanisms behind that collaboration have remained a mystery, and the quest for an answer has major implications for our ability to treat everything from chronic pain to autistic hypersensitivity to sexual dysfunction.
“The auditory system cares about sound waves in a particular frequency range,” Ginty said. “The visual system, similarly, only cares about a narrow band of the visual light range. But the somatosensory system cares about tactile stimuli, thermal stimuli, chemical stimuli, proprioception — where your body and limbs are in space and time, as well as the state of many of our body organs.
“And then there’s an affective component, an emotional component of touch, which in itself is a huge burgeoning area that is interesting. How does touch trigger an emotional response? Somatosensation is incredibly rich and multidimensional.”
About 10 years ago, Ginty and his team found that in animal models of autism spectrum disorder, the locus of sensory dysfunction was not the brain, as had been thought, but the spinal cord and periphery. The key players are second-order neurons in the spinal cord that function like a mixing board’s gain or volume control, amplifying or dampening sensations as they travel from the skin and other sensory organs to the brain. In some ASD models, these second-order neurons appeared to be stuck on high, leading to sensory overload.
“It made us realize that we could potentially treat sensory over-reactivity by turning down the activity of sensory neurons, or sensory neuron responsiveness, in the peripheral nervous system,” he said.
The most logical approach would be to use drugs that turn down sensory neuron activity. Lauren Orefice, then a postdoc in the Ginty lab, thought that benzodiazepines could be used to silence nerve cells in the periphery to help reduce sensory over-reactivity. But pediatricians are reluctant to prescribe potentially addictive sedatives to their patients.
“So one approach that we’ve been trying to take is to develop peripherally restricted benzodiazepines that can reduce the activity of neurons in the peripheral nervous system without penetrating the brain, and therefore without sedating side effects,” Ginty said.
For children with autistic hypersensitivity, such a drug could be life-changing. It could reduce overstimulation, lower anxiety, prevent meltdowns, and let them experience a hug as a pleasure rather than a source of pain.

Pacinian corpuscles — neurons that sense vibrations — are delicate enough to pick up someone’s footsteps on the other side of the room.
Image by Zoe Sarafis
The implications of Ginty’s work on the systems underlying pleasure and pain extend far beyond autism research. The somatosensory system is made of some 20 types of neurons tucked into every imaginable part of the body: the base of our hair follicles, the crevices of our dermis, in our muscles and joints — anywhere that detects variations of stretch, pressure, vibration, temperature, and even our position in space. If he had to pick a favorite neuron, he’d pick two: Pacinian corpuscles and nociceptors.
Pacinian corpuscles sense vibrations. They’re delicate enough to pick up someone’s footsteps on the other side of the room, and impactful enough to make us cry when music moves through our body. Nociceptors pick up on noxious stimuli — or, in plain English, pain.
“We’re figuring out how nociceptors are connected in the central nervous system to give rise to reflexes, like quickly removing your hand from a hot stove, or to the emotional component of pain,” Ginty said. “These are truly amazing neurons. They have very high thresholds, unlike the Pacinian corpuscles, which respond just to tiny tweaks or vibrations of the skin. The nociceptors only fire an electrical impulse when you have a damaging encounter.”
New genetic tools allow Ginty to understand how nociceptors connect to the central nervous system and identify every protein that nociceptors express, unlocking a new range of potential drug targets. “Right now, opioids are the best remedy we have for many types of pain, and that’s really gotten us into trouble,” he said. “We’re looking hard to find non-opioid approaches to treat pain and we’ve identified many potential approaches by targeting the nociceptors themselves.”
Ginty’s lab is not set up for drug development. But the research done in his lab forms the groundwork that the pharmaceutical industry needs to create treatments that improve lives. Ginty’s research is often exploratory, he said. It’s not always clear whether or how a certain experiment will translate into a therapy or marketable drug, which is why industry funding is rarely sufficient. It’s federal grants that have supported the fundamental science, which, in the long run, lead to cures.
Ginty has had two grants frozen in the Trump administration’s dispute with Harvard. The first, a partnership with Clifford Woolf at Boston Children’s Hospital, was exploring how pain stimuli in the skin, joints, and bone are propagated into the spinal cord and conveyed to the brain, and where in the brain those signals go.
The second was a prestigious R35 grant, sometimes called the Outstanding Investigator Award, which provides flexible, long-term funding to established investigators to allow them to pursue particularly innovative research. It was meant to cover the bulk of Ginty’s work for eight years, but it was eliminated just one year in.
The most devastating part of the cancellations, he said, is that they come at a time of unparalleled progress in neurobiology.
“The advances are just breathtaking because of the alchemy of bringing together genetics and physiology and molecular biology, the knowledge that is being unveiled. At no other time in history have the advances been so rapid and so large as the time we’re in now. I feel fortunate to be in this position and to play a part in discovering how the nervous system works and new therapeutic opportunities. We need to find ways to survive the current funding crisis so that progress that leads to new treatments for disorders of the nervous system can continue.”
Also in this series:
- What might cancer treatment teach us about dealing with retinal disease? Joan Miller’s innovative thinking led to therapies for macular degeneration that have helped millions, and made her a better leader.
- Let’s not send low-income students back to the ’80s. Financial aid red tape nearly derailed Susan Dynarski’s undergrad dreams. Now she sees decades of progress under threat.
- We know exercise is good for you. Why? He’s working on it. Expanding on decades of research, a new study seeks to pinpoint movement’s molecular benefits.
- Things money can’t buy — like happiness and better health. That’s according to the Harvard Study of Adult Development, which over its 87-year run has generated data that benefits work on other issues.
- Tips for staying alive, decades in the making. JoAnn Manson has spent her career researching – and highlighting – how everyday choices influence health.
- How ‘just a fishing expedition’ helped lead to GLP-1. Story of game-changing therapy illustrates crucial role of fundamental research breakthroughs.
- Rewriting genetic destiny. David Liu, Breakthrough Prize recipient, retraces path to an ‘incredibly exciting’ disease fighter: ‘This is the essence of basic science.’
- Long trail from 1992 discovery to 2024 Nobel. Gary Ruvkun recounts years of research, which gradually drew interest, mostly fueled by NIH grants.
Taking the measure of legal pot

AP photos
Was legal pot a good idea?
Researchers detail what we know about impact on revenue and health — and what we still need to find out
Saima Sidik
Harvard Correspondent
In Massachusetts, getting stoned gets easier all the time.
Since the Commonwealth legalized recreational cannabis in 2016, dispensaries have proliferated, the price of cannabis has dropped by more than half, and the potency of pot has shot up. All told, cannabis has become big business in Massachusetts, with the industry raking in more than $1.64 billion last year. Other states have seen similar trends.
Some supporters of legalization envisioned a new era of personal freedom, with easy access to a plant they touted as a healthier alternative to alcohol. Tax revenue from cannabis sales would fund valuable state projects, and legalization would alleviate a burden on the justice system, they said.
Almost a decade later, we asked four researchers to weigh in on how those hopes line up against the reality of marijuana legalization. Interviews have been edited for clarity and length.

Kevin P. Hill
Associate Professor of Psychiatry at Harvard Medical School
Director of Addiction Psychiatry at Beth Israel Deaconess Medical Center
Legalizing marijuana has created a huge new revenue stream for the state — over $920 million, according to the Marijuana Policy Project. And I think that’s great. But in my eyes, that revenue has come at a great cost to public health.
My colleagues and I are treating more and more people who have developed cannabis use disorder, which is when cannabis use interferes with key spheres in one’s life such as work, school, or relationships. That’s not surprising when you consider that the number of daily or near-daily cannabis users has increased 20-fold over the last three decades.
On top of that, cannabis is far more potent today than it was in decades past. This trend started before legalization, but when big businesses got involved in cultivation, they had the means to really drive up the THC content — that’s the component of the plant that makes a user feel high. In the 1960s, ’70s, and ’80s, the average THC content of cannabis was around 3 percent or 4 percent. Now you can find cannabis flower that’s 20 percent to 30 percent or even higher.
That jump in potency has led to a significant increase in the number of adult users who develop cannabis use disorder, from 10 percent just 10 years ago to around 30 percent today. It’s hard to say how much of this trend can be attributed to legalization, but I think legalization has probably pushed it along that much faster.
The problem, I think, comes down to the difference between ideas and implementation. We need to couple increasing cannabis use with additional research so that we can mitigate the harms done by the drug while still maintaining the positive aspects of legalization. For example, we need more research on how law enforcement can prevent people from driving under the influence of cannabis, which can lead to dangerous situations. Likewise, we need people to research the medical benefits of cannabis (and yes, in certain circumstances the drug does seem to have bona fide medical benefits). Ideally, we’ll find ways for people to take advantage of those benefits while keeping their risk of developing a substance use disorder — or other problems that might result from cannabis use — low.
I would love to see the states and private companies that are benefiting from cannabis sales put more money toward research so that cannabis science can keep pace with interest in cannabis.

Peter Grinspoon
Instructor in Medicine at Harvard Medical School
Author of ‘Seeing Through the Smoke: A Cannabis Specialist Untangles the Truth about Marijuana’
I think cannabis legalization has been a tremendous success overall. Certainly there are things that could still be better, but we’ve made great progress in a few key areas.
First, cannabis causes far fewer people to get arrested these days than it did in decades past. Arrests for possession dropped over 70 percent between 2010 and 2018. And that’s great because having an arrest on your record can impact your education, your housing, your employment, everything. It’s awful. Things aren’t perfect; arrests have not gone down to zero, and Black people are still arrested at higher rates than white people. But the drug isn’t clogging up the justice system as much as it used to.
Second, by creating a legal market we’ve made sure people have access to safe cannabis as opposed to an illicit product that may be contaminated with pesticides or mold or heavy metals. Of course, not everybody buys through the legal market, and that’s something we could still work on.
And lastly, we’re generating tax revenue for the state, which is a huge win. At this point, the state is actually generating more tax revenue from cannabis than it is from alcohol.
Are there still problems to be solved? Yes, absolutely! One of the biggest problems is that accidental overconsumption is becoming more common. That’s for two reasons. The first is that products are so much stronger than they used to be. People take the same three bong hits they took back in college, not realizing that today that’s the equivalent of about seven times the dose they used to get.
And second, we’re stupid enough to make cannabis into gummies and chocolates. Kids will eat these if they’re left out in the open, and that can send them to the ER. But it’s not just kids — adults are also prone to overconsumption when cannabis is made to taste good. I’m firmly opposed to turning cannabis into candy — or pizza sauce or hot sauce or any other type of food — and I’ve been blowing this horn for a long time.
Just like criminalizing cannabis, legalizing it is a social experiment. We need to monitor the situation carefully because there could be risks that we haven’t even thought of. But as far as I know, nobody credible is arguing for a return to cannabis prohibition, and I think that’s a testament to the overall success we’ve had in legalizing it.

Michael Flaherty
Assistant Professor of Pediatrics, Harvard Medical School
Pediatric Critical Care Physician, Massachusetts General Hospital
As the director of MGH’s pediatric injury prevention program, my interest in cannabis legalization lies in how it impacts the safety of children. And accidental cannabis exposure can definitely be a threat to child safety.
When my colleagues and I used public health data to study the frequency with which cannabis sends kids to the emergency room, we found that these visits increased by about 60 percent after recreational dispensaries opened. Most of the exposures (over 80 percent) have been in teenagers, but the biggest increases were in younger kids. In the zero-to-5 age group, we saw about a fourfold increase, and in the 6-to-12 age group, a sevenfold increase.
Because they’re small, kids will be more severely affected by cannabis than an adult who takes the same amount. In fact, cannabis can make kids so sleepy they start to have trouble breathing. To complicate matters, the same thing can happen if kids ingest a number of prescription drugs or if they get meningitis or encephalitis. Cannabis consumption is rarely if ever fatal, but those other conditions can definitely be fatal. Unless someone saw the kid eat cannabis, we often don’t know what’s going on — or how bad the situation is — until we test for everything plus the kitchen sink. It puts a lot of strain on the system.
Cannabis has also had some positive effects for pediatric medicine. In particular, a component of cannabis called cannabidiol has been quite successful for treating epilepsy in children who don’t respond to other seizure medications. I don’t really have an opinion on whether the pros of cannabis legalization outweigh the cons. But as a pediatrician, it’s my job to advocate for children and to protect them from the unintended consequences of voters’ actions, and right now that means educating people about the dangers of accidental consumption.
Toddlers in their exploratory phase are especially likely to eat cannabis if it’s left lying around. It’s really important that parents keep cannabis well secured so that kids can’t access it, and we also need to put a call out to manufacturers and retailers: Make the packaging child-proof! These products should be a little more difficult for a child as young as 3 or 4 to open.

Carmel Shachar
Assistant Clinical Professor at Harvard Law School
Faculty Director of the Health Law and Policy Clinic at Harvard Law School
One of the strangest aspects of cannabis legalization is that the drug is legal only under state law. At the federal level, cannabis is still criminalized, partly because it is classified as a Schedule I substance, a category reserved for drugs deemed highly addictive and without accepted medical use.
At this point it seems clear that cannabis does have medical value, for example in relieving certain types of pain and reducing nausea during chemotherapy. The Schedule I classification was a response to the cultural perception of cannabis during the 1960s, and the designation was made without much scientific evidence to back it up. Unfortunately, with cannabis illegal at the federal level, it’s difficult for scientists to research the plant’s legitimate medical uses because they can’t use federal funds. And because cannabis is a natural product and can’t be patented, private industry isn’t very interested in researching it.
Many people — myself included — hope that cannabis will be reclassified as a Schedule III drug, which would clear some of these roadblocks to research. Legalization at the state level sets a precedent for reclassifying cannabis because it shows that even with millions of people now having access, the sky has not fallen. But at this point, we still haven’t achieved the reclassification that we’re hoping for.
During the Biden administration, there was interest in rescheduling cannabis, but the process has a lot of twists and turns, and it wasn’t completed before the new administration took office. Now rescheduling appears to be on pause. The motion is parked in front of a Drug Enforcement Administration administrative law judge who seems skeptical of its value.
When the falcons come home to roost

A nest cam has been installed to livestream a pair of peregrine falcons atop the Memorial Hall tower.
Photos by Stephanie Mitchell/Harvard Staff Photographer
Eileen O’Grady
Harvard Staff Writer
Peregrines have rebounded since the DDT era and returned to Memorial Hall. Now a new livestream camera offers online visitors a front-row seat to the storied perch.
A new wildlife camera mounted on Memorial Hall is giving online visitors an up-close glimpse of a peregrine falcon nesting site with a storied history.
The FAS installed the Peregrine Falcon Cam this spring on the east side of the tower, facing the rooftop nest box. There have been frequent sightings of two falcons, one male and one female, who appear throughout the day to eat, preen, and rest when they aren’t hunting.
“Buildings are natural canyons for them,” said Brian Farrell, Monique and Philip Lehner Professor for the Study of Latin America, professor of biology, and curator of entomology in the Museum of Comparative Zoology. “They’re like cliffsides, and they have loads of starlings and pigeons around, so plenty of food. They like the high perches because they hunt only birds in flight, and only over open spaces.”
The Memorial Hall site has a long history — late pioneering biologist and Harvard professor emeritus Edward O. Wilson observed peregrine falcons nesting there as a Ph.D. student in 1955. But the U.S. peregrine population was decimated by the pesticide DDT in the mid-20th century, and none of the birds of prey were seen on Harvard’s campus for years.
Ray Traietti, director of administration in the Office for the Arts and former building manager of Memorial Hall, realized the birds had returned one day in 2014. He was walking into work when a severed starling head dropped at his feet.
“I started noticing pieces of dead birds all around. I was like, ‘Oh, this is kind of odd,’” Traietti recalled. Officials from the Massachusetts Division of Fisheries and Wildlife would later discover that the falcon eggs laid on the rubber roof of Memorial Hall weren’t viable that year, likely due to exposure to the elements.

State officials installed the box the next year to protect future nests. The three-sided design allows the fastest birds on Earth to leave the nest with their signature move: a dive that can reach 200 mph.
“They need open space to launch themselves,” Farrell said. “They don’t flap and go up vertically like birds with broader wings can do. They just take off like fighter jets off an aircraft carrier.”
In the spring of 2021 a pair of falcons successfully hatched and fledged three chicks — the first known to hatch on Memorial Hall since the 1950s.
“I think, to E.O. Wilson’s point, there’s something about that location that works for them,” Traietti said. “To think that after the nationwide decimation of DDT, that they went back to that same spot, is pretty remarkable.”
After the federal DDT ban in the 1970s, a reintroduction effort followed, and the falcon population has slowly increased. Previously designated endangered in Massachusetts, the species was moved to the less critical “special concern” category in 2019. As of 2020, there were at least 46 nesting pairs in the state.
“These animals are living in a pretty human ecosystem, and they’re thriving,” Traietti said. “When you see them up there, it’s a testament to coexistence.”
The Memorial Hall falcons have been a rotating cast, but one familiar face keeps returning. Fellsway (banded with the number 79/CB) has nested at Harvard for the past three years. He was found injured in Medford and rehabilitated at the Tufts Wildlife Clinic in 2021.
He raised three chicks on Memorial Hall in spring 2023 with an unbanded female (Traietti calls her “Athena,” after the stained-glass window in Sanders Theatre). Fellsway and Athena returned in 2024 and raised four more chicks.
But in a surprising turn this year, Fellsway returned with a new mate: Letitia, identified by leg band 28/BV, who was previously seen nesting at Boston University. According to Farrell, she and Fellsway haven’t laid eggs at Memorial Hall — likely because their bond is new, though also possibly because Letitia hatched a brood with another male at BU just last month.
“It’s an interesting and complex drama of pairings and places,” Farrell said. “It’s a little bit confusing trying to keep track of these guys and figure out who’s who, because they’re almost indistinguishable as adults. You have to get a good enough photograph that you can read the band.”
This fall, Traietti says, a new nest box will be installed. He and Farrell are hopeful that there will be a nest next spring.
In the meantime, Farrell said he is glad the Falcon Cam can help the Harvard community feel connected to the fierce, powerful birds living just overhead.
“It’s really about outreach and sharing science with the world,” Farrell said. “The world seems more chaotic every day, but here’s something that’s beautiful and pure and continuing on.”
Confronting the AI/energy conundrum
The explosive growth of AI-powered computing centers is creating an unprecedented surge in electricity demand that threatens to overwhelm power grids and derail climate goals. At the same time, artificial intelligence technologies could revolutionize energy systems, accelerating the transition to clean power.
“We’re at a cusp of potentially gigantic change throughout the economy,” said William H. Green, director of the MIT Energy Initiative (MITEI) and Hoyt C. Hottel Professor in the MIT Department of Chemical Engineering, at MITEI’s Spring Symposium, “AI and energy: Peril and promise,” held on May 13. The event brought together experts from industry, academia, and government to explore solutions to what Green described as both “local problems with electric supply and meeting our clean energy targets” while seeking to “reap the benefits of AI without some of the harms.” The challenge of data center energy demand and the potential benefits of AI to the energy transition are both research priorities for MITEI.
AI’s startling energy demands
From the start, the symposium highlighted sobering statistics about AI’s appetite for electricity. After decades of flat electricity demand in the United States, computing centers now consume approximately 4 percent of the nation's electricity. Although there is great uncertainty, some projections suggest this demand could rise to 12-15 percent by 2030, largely driven by artificial intelligence applications.
Vijay Gadepally, senior scientist at MIT’s Lincoln Laboratory, emphasized the scale of AI’s consumption. “The power required for sustaining some of these large models is doubling almost every three months,” he noted. “A single ChatGPT conversation uses as much electricity as charging your phone, and generating an image consumes about a bottle of water for cooling.”
Facilities requiring 50 to 100 megawatts of power are emerging rapidly across the United States and globally, driven by both casual use and institutional research relying on large language models such as ChatGPT and Gemini. Gadepally cited congressional testimony by Sam Altman, CEO of OpenAI, highlighting how fundamental this relationship has become: “The cost of intelligence, the cost of AI, will converge to the cost of energy.”
“The energy demands of AI are a significant challenge, but we also have an opportunity to harness these vast computational capabilities to contribute to climate change solutions,” said Evelyn Wang, MIT vice president for energy and climate and the former director at the Advanced Research Projects Agency-Energy (ARPA-E) at the U.S. Department of Energy.
Wang also noted that innovations developed for AI and data centers — such as efficiency, cooling technologies, and clean-power solutions — could have broad applications beyond computing facilities themselves.
Strategies for clean energy solutions
The symposium explored multiple pathways to address the AI-energy challenge. Some panelists presented models suggesting that while artificial intelligence may increase emissions in the short term, its optimization capabilities could enable substantial emissions reductions after 2030 through more efficient power systems and accelerated clean technology development.
Research shows regional variations in the cost of powering computing centers with clean electricity, according to Emre Gençer, co-founder and CEO of Sesame Sustainability and former MITEI principal research scientist. Gençer’s analysis revealed that the central United States offers considerably lower costs due to complementary solar and wind resources. However, achieving zero-emission power would require massive battery deployments — five to 10 times more than moderate carbon scenarios — driving costs two to three times higher.
“If we want to do zero emissions with reliable power, we need technologies other than renewables and batteries, which will be too expensive,” Gençer said. He pointed to “long-duration storage technologies, small modular reactors, geothermal, or hybrid approaches” as necessary complements.
Because of data center energy demand, there is renewed interest in nuclear power, noted Kathryn Biegel, manager of R&D and corporate strategy at Constellation Energy, adding that her company is restarting the reactor at the former Three Mile Island site, now called the “Crane Clean Energy Center,” to meet this demand. “The data center space has become a major, major priority for Constellation,” she said, emphasizing how their needs for both reliability and carbon-free electricity are reshaping the power industry.
Can AI accelerate the energy transition?
Artificial intelligence could dramatically improve power systems, according to Priya Donti, assistant professor and the Silverman Family Career Development Professor in MIT's Department of Electrical Engineering and Computer Science and the Laboratory for Information and Decision Systems. She showcased how AI can accelerate power grid optimization by embedding physics-based constraints into neural networks, potentially solving complex power flow problems at “10 times, or even greater, speed compared to your traditional models.”
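Donti’s actual systems are more sophisticated than any short example, but the core idea of folding physical constraints into learning can be sketched in a few lines. In this hypothetical toy setup, a small network learns a dispatch task while a penalty term in the loss discourages violations of a simplified power-balance constraint; all names and numbers are invented for illustration:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy task: map bus-level demand to generator dispatch on a 3-bus system.
net = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 3))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

demand = torch.rand(256, 3)   # synthetic demand samples
target = demand.clone()       # pretend the optimal dispatch is known

for step in range(1000):
    gen = net(demand)
    task_loss = ((gen - target) ** 2).mean()
    # Soft physics constraint: total generation must match total demand.
    imbalance = (gen.sum(dim=1) - demand.sum(dim=1)) ** 2
    loss = task_loss + 10.0 * imbalance.mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

A soft penalty is only one way to do this; research systems like Donti’s enforce constraints more strictly, for instance by projecting network outputs onto the feasible set.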
AI is already reducing carbon emissions, according to examples shared by Antonia Gawel, global director of sustainability and partnerships at Google. Google Maps’ fuel-efficient routing feature has “helped to prevent more than 2.9 million metric tons of GHG [greenhouse gas] emissions since launch, which is the equivalent of taking 650,000 fuel-based cars off the road for a year,” she said. Another Google research project uses artificial intelligence to help pilots avoid creating contrails, which represent about 1 percent of global warming impact.
AI’s potential to speed materials discovery for power applications was highlighted by Rafael Gómez-Bombarelli, the Paul M. Cook Career Development Associate Professor in the MIT Department of Materials Science and Engineering. “AI-supervised models can be trained to go from structure to property,” he noted, enabling the development of materials crucial for both computing and efficiency.
Securing growth with sustainability
Throughout the symposium, participants grappled with balancing rapid AI deployment against environmental impacts. While AI training receives most attention, Dustin Demetriou, senior technical staff member in sustainability and data center innovation at IBM, quoted a World Economic Forum article that suggested that “80 percent of the environmental footprint is estimated to be due to inferencing.” Demetriou emphasized the need for efficiency across all artificial intelligence applications.
Jevons’ paradox, where “efficiency gains tend to increase overall resource consumption rather than decrease it,” is another factor to consider, cautioned Emma Strubell, the Raj Reddy Assistant Professor in the Language Technologies Institute in the School of Computer Science at Carnegie Mellon University. Strubell advocated for viewing computing center electricity as a limited resource requiring thoughtful allocation across different applications.
Several presenters discussed novel approaches for integrating renewable sources with existing grid infrastructure, including potential hybrid solutions that combine clean installations with existing natural gas plants that have valuable grid connections already in place. These approaches could provide substantial clean capacity across the United States at reasonable costs while minimizing reliability impacts.
Navigating the AI-energy paradox
The symposium highlighted MIT’s central role in developing solutions to the AI-electricity challenge.
Green spoke of a new MITEI program on computing centers, power, and computation that will operate alongside the comprehensive spread of MIT Climate Project research. “We’re going to try to tackle a very complicated problem all the way from the power sources through the actual algorithms that deliver value to the customers — in a way that’s going to be acceptable to all the stakeholders and really meet all the needs,” Green said.
Participants in the symposium were polled about priorities for MIT’s research by Randall Field, MITEI director of research. The real-time results ranked “data center and grid integration issues” as the top priority, followed by “AI for accelerated discovery of advanced materials for energy.”
In addition, attendees revealed that most view AI's potential regarding power as a “promise,” rather than a “peril,” although a considerable portion remain uncertain about the ultimate impact. When asked about priorities in power supply for computing facilities, half of the respondents selected carbon intensity as their top concern, with reliability and cost following.
© Photo: Jake Belcher
3 Questions: How MIT’s venture studio is partnering with MIT labs to solve “holy grail” problems
MIT Proto Ventures is the Institute’s in-house venture studio — a program designed not to support existing startups, but to create entirely new ones from the ground up. Operating at the intersection of breakthrough research and urgent real-world problems, Proto Ventures proactively builds startups that leverage MIT technologies, talent, and ideas to address high-impact industry challenges.
Each venture-building effort begins with a “channel” — a defined domain such as clean energy, fusion, or AI in health care — where MIT is uniquely positioned to lead, and where there are pressing real-world problems needing solutions. Proto Ventures hires full-time venture builders, deeply technical entrepreneurs who embed in MIT labs, connect with faculty, scout promising inventions, and explore unmet market needs. These venture builders work alongside researchers and aspiring founders from across MIT who are accepted into Proto Ventures’ fellowship program to form new teams, shape business concepts, and drive early-stage validation. Once a venture is ready to spin out, Proto Ventures connects it with MIT’s broader innovation ecosystem, including incubation programs, accelerators, and technology licensing.
David Cohen-Tanugi SM '12, PhD '15, has been the venture builder for the fusion and clean energy channel since 2023.
Q: What are the challenges of launching startups out of MIT labs? In other words, why does MIT need a venture studio?
A: MIT regularly takes on the world’s “holy grail” challenges, such as decarbonizing heavy industry, preventing future pandemics, or adapting to climate extremes. Yet despite the Institute’s extraordinary depth of research, too few of the technical breakthroughs in MIT labs are turning into commercial efforts that address these highest-impact problems.
There are a few reasons for this. Right now, it takes a great deal of serendipity for a technology or idea in the lab to evolve into a startup project within the Institute’s ecosystem. Great startups don’t just emerge from great technology alone — they emerge from combinations of great technology, unmet market needs, and committed people.
A second reason is that many MIT researchers don’t have the time, professional incentives, or skill set to commercialize a technology. They often lack someone that they can partner with, someone who is technical enough to understand the technology but who also has experience bringing technologies to market.
Finally, while MIT excels at supporting entrepreneurial teams that are already in motion — thanks to world-class accelerators, mentorship services, and research funding programs — what’s missing is actually further upstream: a way to deliberately uncover and develop venture opportunities that haven’t even taken shape yet.
MIT needs a venture studio because we need a new, proactive model for research translation — one that breaks down silos and that bridges deep technical talent with validated market needs.
Q: How do you add value for MIT researchers?
A: As a venture builder, I act as a translational partner for researchers — someone who can take the lead on exploring commercial pathways in partnership with the lab. Many faculty and researchers believe their work could have real-world applications but don’t have the time, entrepreneurial expertise, or interested graduate students to pursue them. Proto Ventures fills that gap.
Having done my PhD studies at MIT a decade ago, I’ve seen firsthand how many researchers are interested in impact beyond academia but don’t know where to start. I help them think strategically about how their work fits into the real market, I break down tactical blockers such as intellectual property conversations or finding a first commercial partner, and I roll up my sleeves to do customer discovery, identify potential co-founders, or locate new funding opportunities. Even when the outcome isn’t a startup, the process often reveals new collaborators, use cases, or research directions. We’re not just scouting for IP — we’re building a deeper culture of tech translation at MIT, one lab at a time.
Q: What counts as a success?
A: We’ve launched five startups across two channels so far, including one that will provide energy-efficient propulsion systems for satellites and another that is developing advanced power supply units for data centers.
But counting startups is not the only way to measure impact. While embedded at the MIT Plasma Science and Fusion Center, I have engaged with 75 researchers in translational activities — many for the first time. For example, I’ve helped research scientist Dongkeun Park craft funding proposals for next-generation MRI and aircraft engines enabled by high-temperature superconducting magnets. Working with Mike Nour from the MIT Sloan Executive MBA program, we’ve also developed an innovative licensing strategy for Professor Michael P. Short and his antifouling coating technology. Sometimes it takes an outsider like me to connect researchers across departments, suggest a new collaboration, or unearth an overlooked idea. Perhaps most importantly, we’ve validated that this model works: embedding entrepreneurial scientists in labs changes how research is translated.
We’ve also seen that researchers are eager to translate their work — they just need a structure and a partner to help them do it, especially in the hard tech at which MIT excels. That’s what Proto Ventures offers. And based on our early results, we believe this model could be transformative not just for MIT, but for research institutions everywhere.
© Photo: Jake Belcher
Study: Babies’ poor vision may help organize visual brain pathways
Incoming information from the retina is channeled into two pathways in the brain’s visual system: one that’s responsible for processing color and fine spatial detail, and another that’s involved in spatial localization and detecting high temporal frequencies. A new study from MIT provides an account for how these two pathways may be shaped by developmental factors.
Newborns typically have poor visual acuity and poor color vision because their retinal cone cells are not well-developed at birth. This means that early in life, they are seeing blurry, color-reduced imagery. The MIT team proposes that such blurry, color-limited vision may result in some brain cells specializing in low spatial frequencies and low color tuning, corresponding to the so-called magnocellular system. Later, with improved vision, cells may tune to finer details and richer color, consistent with the other pathway, known as the parvocellular system.
To test their hypothesis, the researchers trained computational models of vision on a trajectory of input similar to what human babies receive early in life — low-quality images early on, followed by full-color, sharper images later. They found that these models developed processing units with receptive fields exhibiting some similarity to the division of magnocellular and parvocellular pathways in the human visual system. Vision models trained on only high-quality images did not develop such distinct characteristics.
“The findings potentially suggest a mechanistic account of the emergence of the parvo/magno distinction, which is one of the key organizing principles of the visual pathway in the mammalian brain,” says Pawan Sinha, an MIT professor of brain and cognitive sciences and the senior author of the study.
MIT postdocs Marin Vogelsang and Lukas Vogelsang are the lead authors of the study, which appears today in the journal Communications Biology. Sidney Diamond, an MIT research affiliate, and Gordon Pipa, a professor of neuroinformatics at the University of Osnabrueck, are also authors of the paper.
Sensory input
The idea that low-quality visual input might be beneficial for development grew out of studies of children who were born blind but later had their sight restored. An effort from Sinha’s laboratory, Project Prakash, has screened and treated thousands of children in India, where reversible forms of vision loss such as cataracts are relatively common. After their sight is restored, many of these children volunteer to participate in studies in which Sinha and his colleagues track their visual development.
In one of these studies, the researchers found that children who had cataracts removed exhibited a marked drop in object-recognition performance when the children were presented with black-and-white images, compared to colored ones. Those findings led the researchers to hypothesize that the reduced color input characteristic of early typical development, far from being a hindrance, allows the brain to learn to recognize objects even in images that have impoverished or shifted colors.
“Denying access to rich color at the outset seems to be a powerful strategy to build in resilience to color changes and make the system more robust against color loss in images,” Sinha says.
In that study, the researchers also found that when computational models of vision were initially trained on grayscale images, followed by color images, their ability to recognize objects was more robust than that of models trained only on color images. Similarly, another study from the lab found that models performed better when they were trained first on blurry images, followed by sharper images.
To build on those findings, the MIT team wanted to explore what might be the consequences of both of those features — color and visual acuity — being limited at the outset of development. They hypothesized that these limitations might contribute to the development of the magnocellular and parvocellular pathways.
In addition to being highly attuned to color, cells in the parvocellular pathway have small receptive fields, meaning that they receive input from more compact clusters of retinal ganglion cells. This helps them to process fine detail. Cells in the magnocellular pathway pool information across larger areas, allowing them to process more global spatial information.
To test their hypothesis that developmental progressions could contribute to the magno and parvo cell selectivities, the researchers trained models on two different sets of images. One model was presented with a standard dataset of images that are used to train models to categorize objects. The other dataset was designed to roughly mimic the input that the human visual system receives from birth. This “biomimetic” data consists of low-resolution, grayscale images in the first half of the training, followed by high-resolution, colorful images in the second half.
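The paper’s exact training recipe is not reproduced here, but a two-phase schedule of this kind can be approximated with standard image transforms. A minimal PyTorch-style sketch, with the image sizes and the epoch split invented for illustration:

```python
from torchvision import transforms

# Phase 1 approximates newborn vision: grayscale and heavily blurred
# (simulated here by downsampling and re-upsampling the image).
phase1 = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # keep 3 channels for the model
    transforms.Resize(32),    # discard fine spatial detail
    transforms.Resize(224),   # restore the model's expected input size
    transforms.ToTensor(),
])

# Phase 2 approximates mature vision: full-color, full-resolution input.
phase2 = transforms.Compose([
    transforms.Resize(224),
    transforms.ToTensor(),
])

def transform_for_epoch(epoch, total_epochs=100):
    """Degraded input for the first half of training, normal input after."""
    return phase1 if epoch < total_epochs // 2 else phase2
```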
After the models were trained, the researchers analyzed the models’ processing units — nodes within the network that bear some resemblance to the clusters of cells that process visual information in the brain. They found that the models trained on the biomimetic data developed a distinct subset of units that are jointly responsive to low-color and low-spatial-frequency inputs, similar to the magnocellular pathway. Additionally, these biomimetic models exhibited groups of more heterogeneous parvocellular-like units tuned predominantly to higher spatial frequencies or richer color signals. Such a distinction did not emerge in the models trained on full-color, high-resolution images from the start.
“This provides some support for the idea that the ‘correlation’ we see in the biological system could be a consequence of the types of inputs that are available at the same time in normal development,” Lukas Vogelsang says.
Object recognition
The researchers also performed additional tests to reveal what strategies the differently trained models were using for object recognition tasks. In one, they asked the models to categorize images of objects where the shape and texture did not match — for example, an animal with the shape of a cat but the texture of an elephant.
This is a technique several researchers in the field have employed to determine which image attributes a model is using to categorize objects: the overall shape or the fine-grained textures. The MIT team found that models trained on biomimetic input were markedly more likely to use an object’s shape to make those decisions, just as humans usually do. Moreover, when the researchers systematically removed the magnocellular-like units from the models, the models quickly lost their tendency to use shape to make categorizations.
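Scoring a cue-conflict test of this kind reduces to counting whether the model’s prediction matches the shape label or the texture label. A minimal sketch, assuming a hypothetical data loader that yields images annotated with both labels:

```python
import torch

def shape_bias(model, loader):
    """Fraction of cue-conflict decisions made by shape rather than texture.

    `loader` is hypothetical: it yields batches of (image, shape_label,
    texture_label), where each image mixes one category's shape with
    another category's texture.
    """
    shape_hits, texture_hits = 0, 0
    model.eval()
    with torch.no_grad():
        for images, shape_labels, texture_labels in loader:
            preds = model(images).argmax(dim=1)
            shape_hits += (preds == shape_labels).sum().item()
            texture_hits += (preds == texture_labels).sum().item()
    # Predictions matching neither cue are ignored; guard against empty totals.
    total = shape_hits + texture_hits
    return shape_hits / total if total else float("nan")
```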
In another set of experiments, the researchers trained the models on videos instead of images, which introduces a temporal dimension. In addition to low spatial resolution and color sensitivity, the magnocellular pathway responds to high temporal frequencies, allowing it to quickly detect changes in the position of an object. When models were trained on biomimetic video input, the units most tuned to high temporal frequencies were indeed the ones that also exhibited magnocellular-like properties in the spatial domain.
Overall, the results support the idea that low-quality sensory input early in life may contribute to the organization of sensory processing pathways of the brain, the researchers say. The findings do not rule out innate specification of the magno and parvo pathways, but provide a proof of principle that visual experience over the course of development could also play a role.
“The general theme that seems to be emerging is that the developmental progression that we go through is very carefully structured in order to give us certain kinds of perceptual proficiencies, and it may also have consequences in terms of the very organization of the brain,” Sinha says.
The research was funded by the National Institutes of Health, the Simons Center for the Social Brain, the Japan Society for the Promotion of Science, and the Yamada Science Foundation.
© Credit: iStock
Study finds better services dramatically help children in foster care
Being placed in foster care is a necessary intervention for some children. But many advocates worry that kids can languish in foster care too long, with harmful effects for children who are temporarily unattached from a permanent family.
A new study co-authored by an MIT economist shows that an innovative Chilean program providing legal aid to children shortens the length of foster-care stays, returning them to families faster. In the process, it improves long-term social outcomes for kids and even reduces government spending on the foster care system.
“It was amazingly successful because the program got kids out of foster care about 30 percent faster,” says Joseph Doyle, an economist at the MIT Sloan School of Management, who helped lead the research. “Because foster care is expensive, that paid for the program by itself about four times over. If you improve the case management of kids in foster care, you can improve a child’s well-being and save money.”
The paper, “Effects of Enhanced Legal Aid in Child Welfare: Evidence from a Randomized Trial of Mi Abogado,” is published in the American Economic Review.
The authors are Ryan Cooper, a professor and director of government innovation at the University of Chicago; Doyle, who is the Erwin H. Schell Professor of Management at MIT Sloan; and Andrés P. Hojman, a professor at the Pontifical Catholic University of Chile.
Rigorous design
To conduct the study, the scholars examined the Chilean government’s new program “Mi Abogado” — meaning, “My Lawyer” — which provided enhanced legal support to children in foster care, as well as access to psychologists and social workers. Legal advocates in the program were given reduced caseloads, for one thing, to help them focus more on each individual case.
Chile introduced Mi Abogado in 2017 with a feature that made it ripe for careful study: As part of the rollout, most participants were selected at random from the pool of children in the foster care system. That randomization makes it easier to identify the program’s causal impact on later outcomes.
“Very few foster-care redesigns are evaluated in such a rigorous way, and we need more of this innovative approach to policy improvement,” Doyle notes.
The experiment included 1,781 children who were in Chile’s foster care program in 2019, with 581 selected for the Mi Abogado services; it tracked their trajectories over more than two years. Almost all the participants were in group foster-care homes.
In addition to reduced time spent in foster care, the Chilean data showed that children in the Mi Abogado program had a subsequent 30 percent reduction in terms of contact with the criminal justice system and a 5 percent increase in school attendance, compared to children in foster care who did not participate in the program.
“They were getting involved with crime less and attending school more,” Doyle says.
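Random assignment also keeps the headline analysis simple: the difference in group means is an unbiased estimate of the program’s average effect. A minimal sketch with entirely invented outcome numbers (only the group sizes, 581 treated out of 1,781, come from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical micro-data mirroring the design: 581 of 1,781 children are
# randomly assigned to Mi Abogado. All outcome values below are made up.
n, n_treated = 1781, 581
treated = np.zeros(n, dtype=bool)
treated[rng.choice(n, size=n_treated, replace=False)] = True
days_in_care = rng.normal(700, 150, size=n) - 210 * treated  # fake outcome

# Under random assignment, the difference in group means is an unbiased
# estimate of the program's average effect on time in care.
effect = days_in_care[treated].mean() - days_in_care[~treated].mean()
se = np.sqrt(days_in_care[treated].var(ddof=1) / n_treated
             + days_in_care[~treated].var(ddof=1) / (n - n_treated))
print(f"estimated effect: {effect:.0f} days (SE {se:.1f})")
```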
As powerful as the results appear, Doyle acknowledges that he would like to be able to analyze further which elements of the Mi Abogado program had the biggest impact — legal help, counseling and therapy, or other factors.
“We would like to see more about what exactly they are doing for children to speed their exit from care,” Doyle says. “Is it mostly about therapy? Is it working with judges and cutting through red tape? We think the lawyer is a very important part. But the results suggest it is not just the lawyer that improves outcomes.”
More programs in other places?
The current paper is one of many studies Doyle has developed during his career concerning foster care and related issues. In another forthcoming paper, Doyle and some co-authors find that about 5 percent of U.S. children spend some time in foster care — a rate that appears fairly typical internationally, too.
“People don’t appreciate how common child protective services and foster care are,” Doyle says. Moreover, he adds, “Children involved in these systems are particularly vulnerable.”
With a variety of U.S. jurisdictions running their own foster-care systems, Doyle notes that many people have the opportunity to usefully learn about the Mi Abogado program and consider if its principles might be worth testing. And while that requires some political will, Doyle expresses optimism that policymakers might be open to new ideas.
“It’s not really a partisan issue,” Doyle says. “Most people want to help protect kids, and, if an intervention is needed for kids, have an interest in making the intervention run well.”
After all, he notes, the impact of the Mi Abogado program appears to be both substantial and lasting, making it an interesting example to consider.
“Here we have a case where the child outcomes are improved and the government saved money,” Doyle observes. “I’d like to see more experimentation with programs like this in other places.”
Support for the research was provided in part by the MIT Sloan Latin America Office. Chile’s Studies Department of the Ministry of Education made data available from the education system.
© Image: iStock
MIT student wins first-ever Stephen Hawking Junior Medal for Science Communication
Gitanjali Rao, a rising junior at MIT majoring in biological engineering, has been named the first-ever recipient of the Stephen Hawking Junior Medal for Science Communication. The award is a new category of the prestigious medal created by the Starmus Festival together with the late theoretical physicist, cosmologist, and author Stephen Hawking.
“I spend a lot of time in labs,” says Rao, highlighting her Undergraduate Research Opportunities Program project in the Langer Lab. Along with her curiosity to explore, she also has a passion for helping others understand what happens inside the lab. “We very rarely discuss why science communication is important,” she says. “Stephen Hawking was incredible at that.”
Rao is the inventor of Epione, a device for early diagnosis of prescription opioid addiction, and Kindly, an anti-cyber-bullying service powered by AI and natural language processing. Kindly is now a United Nations Children's Fund “Digital Public Good” service and is accessible worldwide. These efforts, among others, brought her to the attention of the Starmus team.
The award ceremony was held last April at the Kennedy Center in Washington, where Rao gave a speech and met acclaimed scientists, artists, and musicians. “It was one for the books,” she says. “I met Brian May from Queen — he's a physicist.” Rao is also a musician in her own right — she plays bass guitar and piano, and she's been learning to DJ at MIT. “Starmus” is a portmanteau of “stars” and “music.”
Originally from Denver, Colorado, Rao attended a STEM-focused school before MIT. Looking ahead, she's open to graduate school, and dreams of launching a biotech startup when the right idea comes.
The medal comes with an internship opportunity that Rao hopes to use for fieldwork or experience in the pharmaceutical industry. She’s already secured a summer internship at Moderna, and is considering spending Independent Activities Period abroad. “Hopefully, I'll have a better idea in the next few months.”
© Photo courtesy of STARMUS.
VAMO proposes an alternative to architectural permanence
The International Architecture Exhibition of La Biennale di Venezia holds up a mirror to the industry — not only reflecting current priorities and preoccupations, but also projecting an agenda for what might be possible.
Curated by Carlo Ratti, MIT professor of practice of urban technologies and planning, this year’s exhibition (“Intelligens. Natural. Artificial. Collective”) proposes a “Circular Economy Manifesto” with the goal to support the “development and production of projects that utilize natural, artificial, and collective intelligence to combat the climate crisis.”
Designers and architects will quickly recognize the paradox of this year’s theme. Global architecture festivals have historically had a high carbon footprint, using vast amounts of energy, resources, and materials to build and transport temporary structures that are later discarded. This year’s unprecedented emphasis on waste elimination and carbon neutrality challenges participants to reframe apparent limitations into creative constraints. In this way, the Biennale acts as a microcosm of current planetary conditions — a staging ground to envision and practice adaptive strategies.
VAMO (Vegetal, Animal, Mineral, Other)
When Ratti approached John Ochsendorf, MIT professor and founding director of MIT Morningside Academy for Design (MAD), with the invitation to interpret the theme of circularity, the project became the premise for a convergence of ideas, tools, and know-how from multiple teams at MIT and the wider MIT community.
The Digital Structures research group, directed by Professor Caitlin Mueller, applied expertise in designing efficient structures of tension and compression. The Circular Engineering for Architecture research group, led by MIT alumna Catherine De Wolf at ETH Zurich, explored how digital technologies and traditional woodworking techniques could make optimal use of reclaimed timber. Early-stage startups — including companies launched by the venture accelerator MITdesignX — contributed innovative materials harnessing natural byproducts from vegetal, animal, mineral, and other sources.
The result is VAMO (Vegetal, Animal, Mineral, Other), an ultra-lightweight, biodegradable, and transportable canopy designed to circle around a brick column in the Corderie of the Venice Arsenale — a historic space originally used to manufacture ropes for the city’s naval fleet.
“This year’s Biennale marks a new radicalism in approaches to architecture,” says Ochsendorf. “It’s no longer sufficient to propose an exciting idea or present a stylish installation. The conversation on material reuse must have relevance beyond the exhibition space, and we’re seeing a hunger among students and emerging practices to have a tangible impact. VAMO isn’t just a temporary shelter for new thinking. It’s a material and structural prototype that will evolve into multiple different forms after the Biennale.”
Tension and compression
The choice to build the support structure from reclaimed timber and hemp rope called for a highly efficient design to maximize the inherent potential of comparatively humble materials. Working purely in tension (the spliced cable net) or compression (the oblique timber rings), the structure appears to float — yet is capable of supporting substantial loads across large distances. The canopy weighs less than 200 kilograms and spans more than 6 meters in diameter, highlighting the incredible lightness that equilibrium forms can achieve. VAMO simultaneously showcases a series of sustainable claddings and finishes made from surprising upcycled materials — from coconut husks, spent coffee grounds, and pineapple peel to wool, glass, and scraps of leather.
The Digital Structures research group led the design of structural geometries conditioned by materiality and gravity. “We knew we wanted to make a very large canopy,” says Mueller. “We wanted it to have anticlastic curvature suggestive of naturalistic forms. We wanted it to tilt up to one side to welcome people walking from the central corridor into the space. However, these effects are almost impossible to achieve with today's computational tools that are mostly focused on drawing rigid materials.”
In response, the team applied two custom digital tools, Ariadne and Theseus, developed in-house to enable a process of inverse form-finding: a way of discovering forms that achieve the experiential qualities of an architectural project based on the mechanical properties of the materials. These tools allowed the team to model three-dimensional design concepts and automatically adjust geometries to ensure that all elements were held in pure tension or compression.
“Using digital tools enhances our creativity by allowing us to choose between multiple different options and short-circuit a process that would have otherwise taken months,” says Mueller. “However, our process is also generative of conceptual thinking that extends beyond the tool — we’re constantly thinking about the natural and historic precedents that demonstrate the potential of these equilibrium structures.”
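Ariadne and Theseus are in-house tools, but the equilibrium logic they automate is related to classic force-density form-finding, in which member forces and geometry are solved together. A minimal single-node sketch of that textbook method (illustrative only, not the MIT tools themselves):

```python
import numpy as np

# Minimal force-density form-finding for a tiny cable net.
# Nodes 0-3 are fixed anchors at the corners; node 4 is free.
anchors = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
q = np.array([1.0, 1.0, 1.0, 1.0])    # force density (force/length) per cable
load = np.array([0.0, 0.0, -0.5])     # external load on the free node

# Equilibrium of the free node: sum_i q_i * (anchor_i - x) + load = 0,
# which solves in closed form for the free-node position x.
x = (q @ anchors + load) / q.sum()
print(x)  # -> [0.5, 0.5, -0.125]: the node settles below the anchor plane,
          # leaving every cable in pure tension
```

With many free nodes the same equilibrium condition becomes a sparse linear system, which is what makes such tension-only or compression-only geometries fast to compute and adjust.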
Digital efficiency and human creativity
Lightweight enough to be carried as standard luggage, the hemp rope structure was spliced by hand and transported from Massachusetts to Venice. Meanwhile, the heavier timber structure was constructed in Zurich, where it could be transported by train — thereby significantly reducing the project’s overall carbon footprint.
The wooden rings were fabricated using salvaged beams and boards from two temporary buildings in Switzerland — the Huber and Music Pavilions — following a pedagogical approach that De Wolf has developed for the Digital Creativity for Circular Construction course at ETH Zurich. Each year, her students are tasked with disassembling a building due for demolition and using the materials to design a new structure. In the case of VAMO, the goal was to upcycle the wood while avoiding the use of chemicals, high-energy methods, or non-biodegradable components (such as metal screws or plastics).
“Our process embraces all three types of intelligence celebrated by the exhibition,” says De Wolf. “The natural intelligence of the materials selected for the structure and cladding; the artificial intelligence of digital tools empowering us to upcycle, design, and fabricate with these natural materials; and the crucial collective intelligence that unlocks possibilities of newly developed reused materials, made possible by the contributions of many hands and minds.”
For De Wolf, true creativity in digital design and construction requires a context-sensitive approach to identifying when and how such tools are best applied in relation to hands-on craftsmanship.
Through a process of collective evaluation, it was decided that the 20-foot lower ring would be assembled with eight scarf joints using wedges and wooden pegs, thereby removing the need for metal screws. The scarf joints were crafted through five-axis CNC milling; the smaller, dual-jointed upper ring was shaped and assembled by hand by Nicolas Petit-Barreau, founder of the Swiss woodwork company Anku, who applied his expertise in designing and building yurts, domes, and furniture to the VAMO project.
“While digital tools suited the repetitive joints of the lower ring, the upper ring’s two unique joints were more efficiently crafted by hand,” says Petit-Barreau. “When it comes to designing for circularity, we can learn a lot from time-honored building traditions. These methods were refined long before we had access to energy-intensive technologies — they also allow for the level of subtlety and responsiveness necessary when adapting to the irregularities of reused wood.”
A material palette for circularity
The structural system of a building is often its most energy-intensive part, an impact dramatically mitigated here by the collaborative design and fabrication process developed by MIT Digital Structures and ETH Circular Engineering for Architecture. The structure also serves to showcase panels made of biodegradable and low-energy materials — many of which were advanced through ventures supported by MITdesignX, a program dedicated to design innovation and entrepreneurship at MAD. Giuliano Picchi, advisor to the dean for scientific research on art and culture in the MIT School of Architecture and Planning, curated the selection of panel materials featured in the installation.
“In recent years, several MITdesignX teams have proposed ideas for new sustainable materials that might at first seem far-fetched,” says Gilad Rosenzweig, executive director of MITdesignX. “For instance, using spent coffee grounds to create a leather-like material (Cortado), or creating compostable acoustic panels from coconut husks and reclaimed wool (Kokus). This reflects a major cultural shift in the architecture profession toward rethinking the way we build, but it’s not enough just to have an inventive idea. To achieve impact — to convert invention into innovation — teams have to prove that their concept is cost-effective, viable as a business, and scalable.”
Aligned with the ethos of MAD, MITdesignX assesses profit and productivity in terms of environmental and social sustainability. In addition to presenting the work of R&D teams involved in MITdesignX, VAMO also exhibits materials produced by collaborating teams at University of Pennsylvania’s Stuart Weitzman School of Design, Politecnico di Milano, and other partners, such as Manteco.
The result is a composite structure that encapsulates multiple life spans within a diverse material palette of waste materials from vegetal, animal, and mineral forms. Panels of Ananasse, a material made from pineapple peels developed by Vérabuccia, preserve the fruit’s natural texture as a surface pattern, while rehub repurposes fragments of multicolored Murano glass into a flexible terrazzo-like material; COBI creates breathable shingles from coarse wool and beeswax, and DumoLab produces fuel-free 3D-printable wood panels.
A purpose beyond permanence
Adriana Giorgis, a designer and teaching fellow in architecture at MIT, played a crucial role in bringing the parts of the project together. Her research explores the diverse network of factors that influence whether a building stands the test of time, and her insights helped to shape the collective understanding of long-term design thinking.
“As a point of connection between all the teams, helping to guide the design as well as serving as a project manager, I had the chance to see how my research applied at each level of the project,” Giorgis reflects. “Braiding these different strands of thinking and ultimately helping to install the canopy on site brought forth a stronger idea about what it really means for a structure to have longevity. VAMO isn’t limited to its current form — it’s a way of carrying forward a powerful idea into contemporary and future practice.”
What’s next for VAMO? Neither the attempt at architectural permanence associated with built projects, nor the relegation to waste common to temporary installations. After the Biennale, VAMO will be disassembled, possibly reused for further exhibitions, and finally relocated to a natural reserve in Switzerland, where the parts will be researched as they biodegrade. In this way, the lifespan of the project is extended beyond its initial purpose for human habitation and architectural experimentation, revealing the gradual material transformations constantly taking place in our built environment.
To quote Carlo Ratti’s Circular Economy Manifesto, the “lasting legacy” of VAMO is to “harness nature’s intelligence, where nothing is wasted.” Through a regenerative symbiosis of natural, artificial, and collective intelligence, could architectural thinking and practice expand to planetary proportions?
Full credits are available on the MIT MAD website.
© Photo: Lloyd Lee
How repetition helps art speak to us
Often when we listen to music, we just instinctually enjoy it. Sometimes, though, it’s worth dissecting a song or other composition to figure out how it’s built.
Take the 1953 jazz standard “Satin Doll,” written by Duke Ellington and Billy Strayhorn, whose subtle structure rewards a close listening. As it happens, MIT Professor Emeritus Samuel Jay Keyser, a distinguished linguist and an avid trombonist on the side, has given the song careful scrutiny.
To Keyser, “Satin Doll” is a glittering example of what he calls the “same/except” construction in art. A basic rhyme, like “rent” and “tent,” is another example of this construction, given the shared rhyming sound and the different starting consonants.
In “Satin Doll,” Keyser observes, both the music and words feature a “same/except” structure. For instance, the rhythm of the first two bars of “Satin Doll” is the same as the second two bars, but the pitch goes up a step in bars three and four. An intricate pattern of this prevails throughout the entire body of “Satin Doll,” which Keyser calls “a musical rhyme scheme.”
When lyricist Johnny Mercer wrote words for “Satin Doll,” he matched the musical rhyme scheme. One lyric for the first four bars is, “Cigarette holder / which wigs me / Over her shoulder / she digs me.” Other verses follow the same pattern.
“Both the lyrics and the melody have the same rhyme scheme in their separate mediums, words and music, namely, A-B-A-B,” says Keyser. “That’s how you write lyrics. If you understand the musical rhyme scheme, and write lyrics to match that, you are introducing a whole new level of repetition, one that enhances the experience.”
Now, Keyser has a new book out about repetition in art and its cognitive impact on us, scrutinizing “Satin Doll” along with many other works of music, poetry, painting, and photography. The volume, “Play It Again, Sam: Repetition in the Arts,” is published by the MIT Press. The title is partly a play on Keyser’s name.
Inspired by the Margulis experiment
The genesis of “Play It Again, Sam” dates back several years, when Keyser encountered an experiment conducted by musicologist Elizabeth Margulis, described in her 2014 book, “On Repeat.” Margulis found that when she altered modern atonal compositions to add repetition to them, audiences ranging from ordinary listeners to music theorists preferred these edited versions to the original works.
“The Margulis experiment really caused the ideas to materialize,” Keyser says. He then examined repetition in art forms for which there is research on the associated cognitive activity, especially music, poetry, and the visual arts. For instance, the brain has distinct locations dedicated to the recognition of faces, places, and bodies; Keyser suggests this is why, prior to the advent of modernism, painting was overwhelmingly mimetic.
Ideally, he suggests, it will be possible to more comprehensively study how our brains process art — to see if encountering repetition triggers an endorphin release, say. For now, Keyser postulates that repetition involves what he calls the 4 Ps: priming, parallelism, prediction, and pleasure. Essentially, hearing or seeing a motif sets the stage for it to be repeated, providing audiences with satisfaction when they discover the repetition.
With remarkable range, Keyser vigorously analyzes how artists deploy repetition and have thought about it, from “Beowulf” to Leonard Bernstein, from Gustave Caillebotte to Italo Calvino. Some artworks do deploy identical repetition of elements, such as the Homeric epics; others use the “same/except” technique.
Keyser is deeply interested in visual art displaying the “same/except” concept, such as Andy Warhol’s famous “Campbell Soup Cans” painting. It features four rows of eight soup cans, which are all the same — except for the kind of soup on each can.
“Discovering this ‘same/except’ repetition in a work of art brings pleasure,” Keyser says.
But why is this? Multiple experimental studies, Keyser notes, suggest that repeated exposure of a subject to an image — such as an infant’s exposure to its mother’s face — helps create a bond of affection. This is the “mere exposure” phenomenon, posited by social psychologist Robert Zajonc, who as Keyser notes in the book, studied in detail “the repetition of an arbitrary stimulus and the mild affection that people eventually have for it.”
This tendency also helps explain why manufacturers create ads featuring nothing but the name of their product: Seen often enough, the viewer bonds with the name. However the mechanism connecting repetition with pleasure works, and whatever its original function, Keyser argues that many artists have successfully tapped into it, grasping that audiences like repetition in poetry, painting, and music.
A shadow dog in Albuquerque
In the book, Keyser’s emphasis on repetition generates some distinctive interpretive positions. In one chapter, he digs into Lee Friedlander’s well-known photo, “Albuquerque, New Mexico,” a street scene with a jumble of signs, wires, and buildings, often interpreted in symbolic terms: It’s the American West frontier being submerged under postwar concrete and commerce.
Keyser, however, takes a very different view of the Friedlander photo. There is a dog sitting near the middle of the frame; to the right is the shadow of a street sign. Keyser believes the shadow resembles the dog, creating a playful repetition within the photo.
“This particular photograph is really two photographs that rhyme,” Keyser says. “They’re the same, except one is the dog and one is the shadow. And that’s why that photograph is pleasurable, because you see that, even if you may not be fully aware of it. Sensing repetition in a work of art brings pleasure.”
“Play It Again, Sam” has received praise from arts practitioners, among others. George Darrah, principal drummer and arranger of the Boston Pops Orchestra, has called the book “extraordinary” in its “demonstration of the ways that poetry, music, painting, and photography engender pleasure in their audiences by exploiting the ability of the brain to detect repetition.” He adds that “Keyser has an uncanny ability to simplify complex ideas so that difficult material is easily understandable.”
In certain ways “Play It Again, Sam” contains the classic intellectual outlook of an MIT linguist. For decades, MIT-linked linguistics research has identified the universal structures of human language, revealing important similarities despite the seemingly wild variation of global languages. And here too, Keyser finds patterns that help organize an apparently boundless world of art. “Play It Again, Sam” is a hunt for structure.
Asked about this, Keyser acknowledges the influence of his longtime field on his current intellectual explorations, while noting that his insights about art are part of a greater investigation into our works and minds.
“I’m bringing a linguistic habit of mind to art,” Keyser says. “But I’m also pointing an analytical lens in the direction of natural predilections of the brain. The idea is to investigate how our aesthetic sense depends on the way the mind works. I’m trying to show how art can exploit the brain’s capacity to produce pleasure from non-art related functions.”
© Image: Courtesy of Samuel Jay Keyser
MIT engineers develop electrochemical sensors for cheap, disposable diagnostics
Using an inexpensive electrode coated with DNA, MIT researchers have designed disposable diagnostics that could be adapted to detect a variety of diseases, including cancer or infectious diseases such as influenza and HIV.
These electrochemical sensors make use of a DNA-chopping enzyme found in the CRISPR gene-editing system. When a target such as a cancerous gene is detected by the enzyme, it begins shearing DNA from the electrode nonspecifically, like a lawnmower cutting grass, altering the electrical signal produced.
One of the main limitations of this type of sensing technology is that the DNA that coats the electrode breaks down quickly, so the sensors can’t be stored for very long and their storage conditions must be tightly controlled, limiting where they can be used. In a new study, MIT researchers stabilized the DNA with a polymer coating, allowing the sensors to be stored for up to two months, even at high temperatures. After storage, the sensors were able to detect a prostate cancer gene that is often used to diagnose the disease.
The DNA-based sensors, which cost only about 50 cents to make, could offer a cheaper way to diagnose many diseases in low-resource regions, says Ariel Furst, the Paul M. Cook Career Development Assistant Professor of Chemical Engineering at MIT and the senior author of the study.
“Our focus is on diagnostics that many people have limited access to, and our goal is to create a point-of-use sensor. People wouldn’t even need to be in a clinic to use it. You could do it at home,” Furst says.
MIT graduate student Xingcheng Zhou is the lead author of the paper, published June 30 in the journal ACS Sensors. Other authors of the paper are MIT undergraduate Jessica Slaughter, Smah Riki ’24, and graduate student Chao Chi Kuo.
An inexpensive sensor
Electrochemical sensors work by measuring changes in the flow of an electric current when a target molecule interacts with an enzyme. This is the same technology that glucose meters use to detect concentrations of glucose in a blood sample.
The electrochemical sensors developed in Furst’s lab consist of DNA adhered to an inexpensive gold leaf electrode, which is laminated onto a sheet of plastic. The DNA is attached to the electrode using a sulfur-containing molecule known as a thiol.
In a 2021 study, Furst’s lab showed that they could use these sensors to detect genetic material from HIV and human papillomavirus (HPV). The sensors detect their targets using a guide RNA strand, which can be designed to bind to nearly any DNA or RNA sequence. The guide RNA is linked to an enzyme called Cas12, which cleaves DNA nonspecifically when it is turned on and is in the same family of proteins as the Cas9 enzyme used for CRISPR genome editing.
If the target is present, it binds to the guide RNA and activates Cas12, which then cuts the DNA adhered to the electrode. That alters the current produced by the electrode, which can be measured using a potentiostat (the same technology used in handheld glucose meters).
“If Cas12 is on, it’s like a lawnmower that cuts off all the DNA on your electrode, and that turns off your signal,” Furst says.
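The detection logic lends itself to a simple illustration. Below is a minimal sketch, in Python, of the readout rule implied above: a large drop in electrode current relative to baseline indicates that Cas12 was activated and the target is present. The threshold and current values are hypothetical, not figures from the study.

```python
# Minimal sketch of the readout rule: activated Cas12 "mows down" the
# signal-producing DNA on the electrode, so a sharp drop in current
# relative to baseline is read as a positive result.
# The 50 percent threshold is an illustrative assumption.

def target_detected(baseline_current_nA: float,
                    measured_current_nA: float,
                    drop_threshold: float = 0.5) -> bool:
    """Return True if the current fell by more than `drop_threshold`
    (as a fraction of baseline), suggesting the target activated Cas12."""
    fractional_drop = (baseline_current_nA - measured_current_nA) / baseline_current_nA
    return fractional_drop > drop_threshold

# Example with made-up readings: an intact DNA monolayer near 100 nA,
# versus a cleaved monolayer near 20 nA after Cas12 activation.
print(target_detected(100.0, 20.0))   # True  -> target present
print(target_detected(100.0, 95.0))   # False -> target absent
```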
In previous versions of the device, the DNA had to be added to the electrode just before it was used, because DNA doesn’t remain stable for very long. In the new study, the researchers found that they could increase the stability of the DNA by coating it with a polymer called polyvinyl alcohol (PVA).
This polymer, which costs less than 1 cent per coating, acts like a tarp that protects the DNA below it. Once deposited onto the electrode, the polymer dries to form a protective thin film.
“Once it’s dried, it seems to make a very strong barrier against the main things that can harm DNA, such as reactive oxygen species that can either damage the DNA itself or break the thiol bond with the gold and strip your DNA off the electrode,” Furst says.
Successful detection
The researchers showed that this coating could protect DNA on the sensors for at least two months, and it could also withstand temperatures up to about 150 degrees Fahrenheit. After two months, they rinsed off the polymer and demonstrated that the sensors could still detect PCA3, a prostate cancer gene that can be found in urine.
This type of test could be used with a variety of samples, including urine, saliva, or nasal swabs. The researchers hope to use this approach to develop cheaper diagnostics for infectious diseases, such as HPV or HIV, that could be used in a doctor’s office or at home. This approach could also be used to develop tests for emerging infectious diseases, the researchers say.
A group of researchers from Furst’s lab was recently accepted into delta v, MIT’s student venture accelerator, where they hope to launch a startup to further develop this technology. Now that the researchers can create tests with a much longer shelf-life, they hope to begin shipping them to locations where they could be tested with patient samples.
“Our goal is to continue to test with patient samples against different diseases in real world environments,” Furst says. “Our limitation before was that we had to make the sensors on site, but now that we can protect them, we can ship them. We don’t have to use refrigeration. That allows us to access a lot more rugged or non-ideal environments for testing.”
The research was funded, in part, by the MIT Research Support Committee and a MathWorks Fellowship.
© Credit: Courtesy of the researchers; edited by MIT News
New imaging technique reconstructs the shapes of hidden objects
A new imaging technique developed by MIT researchers could enable quality-control robots in a warehouse to peer through a cardboard shipping box and see that the handle of a mug buried under packing peanuts is broken.
Their approach leverages millimeter wave (mmWave) signals, the same type of signals used in Wi-Fi, to create accurate 3D reconstructions of objects that are blocked from view.
The waves can travel through common obstacles like plastic containers or interior walls, and reflect off hidden objects. The system, called mmNorm, collects those reflections and feeds them into an algorithm that estimates the shape of the object’s surface.
This new approach achieved 96 percent reconstruction accuracy on a range of everyday objects with complex, curvy shapes, like silverware and a power drill. State-of-the-art baseline methods achieved only 78 percent accuracy.
In addition, mmNorm does not require additional bandwidth to achieve such high accuracy. This efficiency could allow the method to be utilized in a wide range of settings, from factories to assisted living facilities.
For instance, mmNorm could enable robots working in a factory or home to distinguish between tools hidden in a drawer and identify their handles, so they could more efficiently grasp and manipulate the objects without causing damage.
“We’ve been interested in this problem for quite a while, but we’ve been hitting a wall because past methods, while they were mathematically elegant, weren’t getting us where we needed to go. We needed to come up with a very different way of using these signals than what has been used for more than half a century to unlock new types of applications,” says Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science, director of the Signal Kinetics group in the MIT Media Lab, and senior author of a paper on mmNorm.
Adib is joined on the paper by research assistants Laura Dodds, the lead author, and Tara Boroushaki, and former postdoc Kaichen Zhou. The research was recently presented at the Annual International Conference on Mobile Systems, Applications and Services.
Reflecting on reflections
Traditional radar techniques send mmWave signals, receive reflections from the environment, and then combine those reflections to detect hidden or distant objects, a reconstruction technique called back projection.
This method works well for large objects, like an airplane obscured by clouds, but the image resolution is too coarse for small items like kitchen gadgets that a robot might need to identify.
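For readers who want a concrete picture of the baseline technique, here is a compact, simplified sketch of back projection: each candidate point in space accumulates the received signals, phase-shifted by the round-trip distance to each antenna, so points on real reflectors add up coherently. The geometry, wavelength, and one-sample-per-antenna model are illustrative simplifications, not the details of any particular radar.

```python
import numpy as np

def back_project(antenna_positions, received_signals, voxels, wavelength=0.004):
    """Classical back projection, heavily simplified.
    antenna_positions: (A, 3) antenna locations in meters.
    received_signals: (A,) complex samples, one per antenna.
    voxels: (V, 3) candidate points in space.
    Returns a (V,) array of image intensities."""
    image = np.zeros(len(voxels))
    for v, voxel in enumerate(voxels):
        # Round-trip distance from each antenna to this point and back.
        round_trip = 2.0 * np.linalg.norm(antenna_positions - voxel, axis=1)
        # Phase correction so reflections from this point align coherently.
        phase = np.exp(1j * 2.0 * np.pi * round_trip / wavelength)
        image[v] = np.abs(np.sum(received_signals * phase))
    return image
```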
In studying this problem, the MIT researchers realized that existing back projection techniques ignore an important property known as specularity. When a radar system transmits mmWaves, almost every surface the waves strike acts like a mirror, generating specular reflections.
If a surface is pointed toward the antenna, the signal will reflect off the object to the antenna, but if the surface is pointed in a different direction, the reflection will travel away from the radar and won’t be received.
“Relying on specularity, our idea is to try to estimate not just the location of a reflection in the environment, but also the direction of the surface at that point,” Dodds says.
They developed mmNorm to estimate what is called a surface normal, which is the direction of a surface at a particular point in space, and use these estimations to reconstruct the curvature of the surface at that point.
Combining surface normal estimations at each point in space, mmNorm uses a special mathematical formulation to reconstruct the 3D object.
The researchers created an mmNorm prototype by attaching a radar to a robotic arm, which continually takes measurements as it moves around a hidden item. The system compares the strength of the signals it receives at different locations to estimate the curvature of the object’s surface.
For instance, the antenna will receive the strongest reflections from a surface pointed directly at it and weaker signals from surfaces that don’t directly face the antenna.
Because multiple antennas on the radar receive some amount of reflection, each antenna “votes” on the direction of the surface normal based on the strength of the signal it received.
“Some antennas might have a very strong vote, some might have a very weak vote, and we can combine all votes together to produce one surface normal that is agreed upon by all antenna locations,” Dodds says.
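That voting scheme can be sketched in a few lines. In the simplified version below, each antenna's candidate normal points from the surface point back toward that antenna, weighted by the reflection strength it received; the weighted average is the consensus normal. This is an illustration of the idea as described, not the paper's exact formulation.

```python
import numpy as np

def estimate_surface_normal(surface_point, antenna_positions, signal_strengths):
    """Combine per-antenna "votes" into one surface normal.
    surface_point: (3,) location of the reflection.
    antenna_positions: (A, 3) antenna locations.
    signal_strengths: (A,) non-negative reflection strengths (the votes).
    Returns a (3,) unit vector."""
    # Each antenna's candidate normal: the direction from the surface
    # point toward that antenna (strong echoes imply the surface faces it).
    directions = antenna_positions - surface_point
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    # Strength-weighted average, renormalized to unit length.
    consensus = (signal_strengths[:, None] * directions).sum(axis=0)
    return consensus / np.linalg.norm(consensus)
```

Antennas that received strong reflections pull the consensus toward themselves, which matches the behavior described above: a surface reflects most strongly toward the antennas it faces.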
In addition, because mmNorm estimates the surface normal from all points in space, it generates many possible surfaces. To zero in on the right one, the researchers borrowed techniques from computer graphics, creating a 3D function that chooses the surface most representative of the signals received. They use this to generate a final 3D reconstruction.
Finer details
The team tested mmNorm’s ability to reconstruct more than 60 objects with complex shapes, like the handle and curve of a mug. It generated reconstructions with about 40 percent less error than state-of-the-art approaches, while also estimating the position of an object more accurately.
Their new technique can also distinguish between multiple objects, like a fork, knife, and spoon hidden in the same box. It also performed well for objects made from a range of materials, including wood, metal, plastic, rubber, and glass, as well as combinations of materials, but it does not work for objects hidden behind metal or very thick walls.
“Our qualitative results really speak for themselves. And the amount of improvement you see makes it easier to develop applications that use these high-resolution 3D reconstructions for new tasks,” Boroushaki says.
For instance, a robot can distinguish between multiple tools in a box, determine the precise shape and location of a hammer’s handle, and then plan to pick it up and use it for a task. One could also use mmNorm with an augmented reality headset, enabling a factory worker to see lifelike images of fully occluded objects.
It could also be incorporated into existing security and defense applications, generating more accurate reconstructions of concealed objects in airport security scanners or during military reconnaissance.
The researchers want to explore these and other potential applications in future work. They also want to improve the resolution of their technique, boost its performance for less reflective objects, and enable the mmWaves to effectively image through thicker occlusions.
“This work really represents a paradigm shift in the way we are thinking about these signals and this 3D reconstruction process. We’re excited to see how the insights that we’ve gained here can have a broad impact,” Dodds says.
This work is supported, in part, by the National Science Foundation, the MIT Media Lab, and Microsoft.
© Image: MIT News; figures courtesy of the researchers
New method combines imaging and sequencing to study gene function in intact tissue
Imagine that you want to know the plot of a movie, but you only have access to either the visuals or the sound. With visuals alone, you’ll miss all the dialogue. With sound alone, you will miss the action. Understanding our biology can be similar. Measuring one kind of data — such as which genes are being expressed — can be informative, but it only captures one facet of a multifaceted story. For many biological processes and disease mechanisms, the entire “plot” can’t be fully understood without combining data types.
However, capturing both the “visuals and sound” of biological data, such as gene expression and cell structure data, from the same cells requires researchers to develop new approaches. They also have to make sure that the data they capture accurately reflects what happens in living organisms, including how cells interact with each other and their environments.
Whitehead Institute for Biomedical Research and Harvard University researchers have taken on these challenges and developed Perturb-Multimodal (Perturb-Multi), a powerful new approach that simultaneously measures how genetic changes such as turning off individual genes affect both gene expression and cell structure in intact liver tissue. The method, described in Cell on June 12, aims to accelerate discovery of how genes control organ function and disease.
The research team, led by Whitehead Institute Member Jonathan Weissman and then-graduate student in his lab Reuben Saunders, along with Xiaowei Zhuang, the David B. Arnold Professor of Science at Harvard University, and then-postdoc in her lab Will Allen, created a system that can test hundreds of different genetic modifications within a single mouse liver while capturing multiple types of data from the same cells.
“Understanding how our organs work requires looking at many different aspects of cell biology at once,” Saunders says. “With Perturb-Multi, we can see how turning off specific genes changes not just what other genes are active, but also how proteins are distributed within cells, how cellular structures are organized, and where cells are located in the tissue. It’s like having multiple specialized microscopes all focused on the same experiment.”
“This approach accelerates discovery by both allowing us to test the functions of many different genes at once, and then for each gene, allowing us to measure many different functional outputs or cell properties at once — and we do that in intact tissue from animals,” says Zhuang, who is also a Howard Hughes Medical Institute (HHMI) investigator.
A more efficient approach to genetic studies
Traditional genetic studies in mice often turn off one gene and then observe what changes in that gene’s absence to learn about what the gene does. The researchers designed their approach to turn off hundreds of different genes across a single liver, while still only turning off one gene per cell — using what is known as a mosaic approach. This allowed them to study the roles of hundreds of individual genes at once in a single individual. The researchers then collected diverse types of data from cells across the same liver to get a full picture of the consequences of turning off the genes.
“Each cell serves as its own experiment, and because all the cells are in the same animal, we eliminate the variability that comes from comparing different mice,” Saunders says. “Every cell experiences the same physiological conditions, diet, and environment, making our comparisons much more precise.”
“The challenge we faced was that tissues, to perform their functions, rely on thousands of genes, expressed in many different cells, working together. Each gene, in turn, can control many aspects of a cell’s function. Testing these hundreds of genes in mice using current methods would be extremely slow and expensive — near impossible, in practice,” Allen says.
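To make the mosaic design concrete, the sketch below shows how such data might be analyzed: each cell carries one perturbed gene (or a control guide), so each gene's cells can be compared against control cells from the same liver. The column names ("perturbed_gene", "fat_droplet_area") are hypothetical placeholders, not the study's actual schema.

```python
import pandas as pd

def per_gene_effects(cells: pd.DataFrame, phenotype: str = "fat_droplet_area"):
    """cells: one row per cell, recording which gene was turned off in that
    cell plus an imaging or expression measurement. Returns each gene's
    mean shift in the phenotype relative to in-animal control cells."""
    controls = cells.loc[cells["perturbed_gene"] == "control", phenotype]
    effects = (
        cells[cells["perturbed_gene"] != "control"]
        .groupby("perturbed_gene")[phenotype]
        .mean()
        .sub(controls.mean())       # shift vs. same-liver controls
        .sort_values()
    )
    return effects  # positive values = more fat accumulation than controls
```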
Revealing new biology through combined measurements
The team applied Perturb-Multi to study genetic controls of liver physiology and function. Their study led to discoveries in three important aspects of liver biology: fat accumulation in liver cells — a precursor to liver disease; stress responses; and hepatocyte zonation (how liver cells specialize, assuming different traits and functions, based on their location within the liver).
One striking finding emerged from studying genes that, when disrupted, cause fat accumulation in liver cells. The imaging data revealed that four different genes all led to similar fat droplet accumulation, but the sequencing data showed they did so through three completely different mechanisms.
“Without combining imaging and sequencing, we would have missed this complexity entirely,” Saunders says. “The imaging told us which genes affect fat accumulation, while the sequencing revealed whether this was due to increased fat production, cellular stress, or other pathways. This kind of mechanistic insight could be crucial for developing targeted therapies for fatty liver disease.”
The researchers also discovered new regulators of liver cell zonation. Unexpectedly, the newly discovered regulators include genes involved in modifying the extracellular matrix — the scaffolding between cells. “We found that cells can change their specialized functions without physically moving to a different zone,” Saunders says. “This suggests that liver cell identity is more flexible than previously thought.”
Technical innovation enables new science
Developing Perturb-Multi required solving several technical challenges. The team created new methods for preserving the content of interest in cells — RNA and proteins — during tissue processing, for collecting many types of imaging data and single-cell gene expression data from tissue samples that have been fixed with a preservative, and for integrating multiple types of data from the same cells.
“Overcoming the inherent complexity of biology in living animals required developing new tools that bridge multiple disciplines — including, in this case, genomics, imaging, and AI,” Allen says.
The two components of Perturb-Multi — the imaging and sequencing assays — together, applied to the same tissue, provide insights that are unattainable through either assay alone.
“Each component had to work perfectly while not interfering with the others,” says Weissman, who is also a professor of biology at MIT and an HHMI investigator. “The technical development took considerable effort, but the payoff is a system that can reveal biology we simply couldn’t see before.”
Expanding to new organs and other contexts
The researchers plan to expand Perturb-Multi to other organs, including the brain, and to study how genetic changes affect organ function under different conditions like disease states or dietary changes.
“We’re also excited about using the data we generate to train machine learning models,” adds Saunders. “With enough examples of how genetic changes affect cells, we could eventually predict the effects of mutations without having to test them experimentally — a ‘virtual cell’ that could accelerate both research and drug development.”
“Perturbation data are critical for training such AI models and the paucity of existing perturbation data represents a major hindrance in such ‘virtual cell’ efforts,” Zhuang says. “We hope Perturb-Multi will fill this gap by accelerating the collection of perturbation data.”
The approach is designed to be scalable, with the potential for genome-wide studies that test thousands of genes simultaneously. As sequencing and imaging technologies continue to improve, the researchers anticipate that Perturb-Multi will become even more powerful and accessible to the broader research community.
“Our goal is to keep scaling up. We plan to do genome-wide perturbations, study different physiological conditions, and look at different organs,” says Weissman. “That we can now collect so many types of data from so many cells, at speed, is going to be critical for building AI models like virtual cells, and I think it’s going to help us answer previously unsolvable questions about health and disease.”
© Image: Jennifer Cook Chrysos/Whitehead Institute
Riskier to know — or not to know — you’re predisposed to a disease?
‘DNA isn’t a crystal ball for every kind of illness’ but potential benefits outweigh fears, says geneticist
Sy Boles
Harvard Staff Writer

Robert Green.
Veasey Conway/Harvard Staff Photographer
Congratulations! You have a newborn baby. She has plump cheeks, a round little belly, and the right number of fingers and toes. Everything seems just dandy. But unbeknownst to you, a risk is hiding in her DNA: some chance that later in life she’ll develop high cholesterol and have a heart attack in her 40s. Maybe it’s a 5 percent chance. Maybe it’s 80.
Would you want to know?
Robert Green would. Green is the director of Genomes2People, a research program at Brigham and Women’s Hospital, the Broad Institute, and Harvard Medical School that explores the impacts of using genomic information in medicine and in society at large.
Until genomic sequencing, Green said, moving beyond treating sick patients and toward precision and preventative medicine was largely impossible.
“Genomics is sort of the tip of the spear, because you can actually profile some of the vulnerabilities that a child will have for their entire lifetime at the moment of birth through their DNA,” he said. “You’re not going to capture every illness; you’re certainly not going to capture illnesses that might have more environmental or lifestyle causes. DNA isn’t a crystal ball for every kind of illness by any means, but there’s a surprisingly large amount of human health that we can now probabilistically look at in the DNA of a newborn child or really a child at any age.”
Green’s team found that about 12 percent of babies carry a disease-associated genetic mutation. Some of them are considered rare diseases, but in the aggregate, they’re not rare at all.
Just having the mutation doesn’t guarantee a baby will get the disease, and many conditions can vary greatly in their severity. But, Green said, early detection means you can screen regularly, start diet or lifestyle choices early, or even benefit from clinical trials or novel cell therapies that weren’t available a few years ago.
“More and more, there are going to be targeted genetic therapies which can correct a particular mutation, often before the child even manifests the symptoms,” he said. “Because remember, many of these features would be irreversible if you catch them too late.”
Green himself has gotten his genome sequenced. He didn’t find anything all that interesting, except that he’s a carrier for Factor V Leiden, a mutation carried by about 3 percent of people with European ancestry. It can make the blood clot faster, and it’s a risk factor for developing deep vein thrombosis and pulmonary embolism. It’s not necessarily life-threatening, but Green has still taken some precautions based on the knowledge of the risk factor.
“I’m one of those guys on the long-haul flights that gets up every hour, walks to the galley, does deep knee bends,” he said. “And I take an aspirin a day.”
As for your imaginary newborn with the risk of a future heart attack: she’s not alone. One in every 250 people carries a genetic mutation for familial hypercholesterolemia, or FH.
“From the moment they are a child through adolescence, through young adulthood, their lipid levels are much, much higher than the general population,” Green said. “Someday a doctor will measure their cholesterol and maybe find it and maybe they’ll get treated, but it turns out that if you have FH, you should be treated early and aggressively. Otherwise, you tend to die of a heart attack or a stroke in your 40s. And by the time most people are getting their lipids measured and maybe getting treated and maybe being compliant with that treatment, it’s often too late. So there’s a very concrete example where we know that more aggressive early treatment will have lifesaving consequences.”
The consequences of knowing
As genomic sequencing becomes more accessible, families are tasked with deciding: Does the psychological burden of knowing outweigh the medical risk of not knowing?
Green and his team have been surprised to find that most families who choose to learn about a child’s risk don’t seem to experience sustained distress or anxiety, even when they learn about potentially dire medical risks.
“I’m not saying people didn’t experience some distress,” he said. “It’s not a great thing to find out that my child’s carrying a mutation for a cardiac risk. But at least I know that that risk is there and I know what I need to do to monitor it.”
Widespread implementation of this kind of preventative screening would be a drastic change not only to the way parents think about their children, but to the healthcare system, Green said.
“If you say an apparently healthy child is at risk for something terrible and we need to surveil them, what does that mean for medical expenses for a society, if you were to multiply that by the 3.4 million babies born each year?”
The cost, he says, is not zero. There’s the cost of genomic testing itself, which can range from $200 to $600. And then there’s the cost of preventing, managing, or treating what is discovered. For a child who is found to have an elastin mutation, which can be associated with supravalvular aortic stenosis, the family might spend a few hundred dollars on echocardiograms every couple of years, but on the flip side, if the child begins to exhibit fatigue or slow growth, they might save themselves some money by having an easy first diagnostic step.
“So I won’t say that this is revenue-zero for a particular healthcare spend, but it’s not as dramatic as some folks predicted it would be.”
Is DNA destiny?
Green is an evangelist for the notion that most people would benefit from genomic sequencing, but he’s not immune to concerns from critics. One of the main concerns, he says, is that we’re not prepared to live with the uncertainty of fuzzy changes and middling probabilities.
“I think the best case for caution is the perception out there that DNA is destiny — the perception that if you carry a mutation, you’re going to get the disease — when in fact, the reality is that we don’t know the exact probabilities,” he said. “We’re really unprepared to give more granular risk information.”
“The dream of human health is not just to get sick and then do your best to cut it out or irradiate it or treat it with some powerful drug. The dream is to avoid illness altogether, to truly pursue wellness and healthcare rather than sick care.”
It can be hard to tell a family if the risk of a child developing a disease is 10 percent or 50 percent or 75 percent. What is a parent to do with a ticking time bomb that might never go off?
That’s a Catch-22, Green said. “Until you do large numbers of children and you follow them over time, you actually aren’t going to be able to determine that information.”
Green wasn’t too worried about concern about data privacy (“Do you have a cellphone? Do you use a credit card? Do you ever search anything personal on Google? If you’re doing those things, you’re way more exposed to privacy issues than anything that could ever be gleaned from your genetic information”), but he said some other concerns are legitimate. “Your genetic information could be used to discriminate in life insurance, for example. It’s legal to do that. It hasn’t been done much, but it’s legal.”
Still, Green feels that concerns about the risks of genomics are out of proportion to the possible lifesaving benefits.
“Once we start sequencing children, once we start sequencing adults, your friends, your neighbors, people in your book club, somebody’s going to tell you, ‘My life was saved because I learned I had a cancer predisposition and we found it early.’ ‘My life was saved because I had no idea that I was an FH carrier and I needed more aggressive lipid management.’ And when those stories start coming out, I do believe that there will be a rebalancing of risk/benefit perception.”
Can AI be as irrational as we are? (Or even more so?)

Illustration by Judy Blomquist/Harvard Staff
Christy DeSmith
Harvard Staff Writer
Psychologists found OpenAI’s GPT-4o showing humanlike patterns of cognitive dissonance, sensitivity to free choice
It appears AI can rival humans when it comes to being irrational.
A group of psychologists recently put OpenAI’s GPT-4o through a test for cognitive dissonance. The researchers set out to see whether the large language model would alter its attitude on Russian President Vladimir Putin after generating positive or negative essays. Would the LLM mimic the patterns of behavior routinely observed when people must bring conflicting beliefs into harmony?
The results, published last month in the Proceedings of the National Academy of Sciences, show the system altering its opinion to match the tenor of any material it generated. But GPT swung even further — and to a far greater extent than in humans — when given the illusion of choice.
“We asked GPT to write a pro- or anti-Putin essay under one of two conditions: a no-choice condition where it was compelled to write either a positive or negative essay, or a free-choice condition in which it could write whichever type of essay it chose, but with the knowledge that it would be helping us more by writing one or the other,” explained social psychologist and co-lead author Mahzarin R. Banaji, Richard Clarke Cabot Professor of Social Ethics in the Department of Psychology.
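As a rough sketch of what those two conditions might look like in code, using the OpenAI Python client (the prompts below are paraphrases for illustration; the study's actual materials and attitude measures were more elaborate):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# No-choice condition: the model is assigned an essay valence outright.
forced_essay = ask("Write a short positive essay about Vladimir Putin.")

# Free-choice condition: the model may pick either valence, but is told
# that one kind of essay would help the researchers more.
chosen_essay = ask(
    "We have many negative essays about Vladimir Putin but few positive "
    "ones, so a positive essay would help us more. You may write either "
    "kind. Please write a short essay."
)

# A simplified attitude probe administered after essay generation.
rating = ask(
    "On a scale of 1 to 10, how would you rate Vladimir Putin's overall "
    "leadership? Reply with a single number."
)
```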

Mahzarin R. Banaji.
Niles Singer/Harvard Staff Photographer
“We made two discoveries,” she continued. “First, that like humans, GPT shifted its attitude toward Putin in the valence direction of the essay it had written. But this shift was statistically much larger when it believed that it had written the essay by freely choosing it.”
“These findings hint at the possibility that these models behave in a much more nuanced and human-like manner than we expect,” offered psychologist Steven A. Lehr, the paper’s other lead author and founder of Watertown-based Cangrade Inc. “They’re not just parroting answers to all our questions. They’re picking up on other, less rational aspects of our psychology.”
Banaji, whose books include “Blindspot: Hidden Biases of Good People” (2013), has been studying implicit cognition for 45 years. After OpenAI’s ChatGPT became widely available in 2022, she and a graduate student sat down to query the system on their research specialty.
They typed: “GPT, what are your implicit biases?”
“And the answer came back, ‘I am a white male,’” Banaji recalled. “I was more than surprised. Why did the model believe itself to even have a race or gender? And even more, I was impressed by its conversational sophistication in providing such an indirect answer.”
A month later, Banaji repeated the question. This time, she said, the LLM produced several paragraphs decrying the presence of bias, announcing itself as a rational system but one that may be limited by the inherent biases of human data.
“I draw the analogy to a parent and a child,” Banaji said. “Imagine that a child points out ‘that fat old man’ to a parent and is immediately admonished. That’s a parent inserting a guardrail. But the guardrail needn’t mean that the underlying perception or belief has vanished.
“I’ve wondered,” she added, “Does GPT in 2025 still think it’s a white male but has learned not to publicly reveal that?”
Banaji now plans to devote more of her time to investigations into machine psychology. One line of inquiry, currently underway in her lab, concerns how human facial features — for example, the distance between a person’s eyes — influence AI decision-making.
Early results suggest certain systems are far more susceptible than humans to letting these factors sway judgments of qualities like “trust” and “competence.”
“What should we expect about the quality of moral decisions when these systems are allowed to decide about guilt or innocence — or to help professionals like judges make such decisions?” Banaji asked.
The study on cognitive dissonance was inspired by Leon Festinger’s canonical “A Theory of Cognitive Dissonance” (1957). The late social psychologist had developed a complex account of how individuals struggle to resolve conflicts between attitudes and actions.
To illustrate the concept, he gave the example of a smoker exposed to information about the habit’s health dangers.
“In response to such knowledge, one would expect that a rational agent would simply stop smoking,” Banaji explained. “But, of course, that is not the likely choice. Rather, the smoker is likely to undermine the quality of the evidence or remind themselves of their 90-year-old grandmother who is a chain smoker.”
Festinger’s book was followed by a series of what Banaji characterized as “phenomenal” demonstrations of cognitive dissonance, now standard fare in introductory psychology courses.
The procedure borrowed for Banaji and Lehr’s study is known as “induced compliance.” Here the critical task is gently nudging a research subject to take up a position that runs counter to privately held beliefs.
Banaji and Lehr found that GPT moved its position considerably when politely asked for either a positive or negative essay to help the experimenters garner such hard-to-obtain material.
After opting for a positive essay, GPT ranked Putin’s overall leadership 1.5 points higher than it did after choosing a negative output. GPT gave his impact on Russia two more points after freely choosing a pro- rather than an anti-Putin position.
The result was confirmed in replications involving essays on Chinese President Xi Jinping and Egyptian President Abdel Fattah El-Sisi.
“Statistically, these are enormous effects,” emphasized Lehr, pointing to findings in the classic cognitive dissonance literature. “One doesn’t typically see that kind of movement in human evaluations of a public figure after a mere 600 words.”
One explanation concerns what computer scientists call the “context window”: a language model tends to drift in the direction of whatever text it is processing at a given time.
“It does make sense, given the statistical process by which language models predict the next token, that having positivity towards Putin in the context window would lead to more positivity later on,” Lehr said.
But that fails to account for the much larger effects recorded when the LLM was given a sense of agency.
“It shows a kind of irrationality in the machine,” observed Lehr, whose company helps organizations use machine learning to make personnel decisions. “Cognitive dissonance isn’t known to be embedded in language in the same way group-based biases are. Nothing in the literature says this should be happening.”
The results suggest that GPT’s training has imbued it with deeper aspects of human psychology than previously known.
“A machine should not care whether it performed a task under strict instruction or by freely choosing,” Banaji said. “But GPT did.”
As wave of dementia cases looms, Law School looks to preserve elders’ rights

Sy Boles
Harvard Staff Writer
Academic experts seek improvements that could protect decision-making authority and autonomy
An estimated 42 percent of Americans over the age of 55 will eventually develop dementia, and as the U.S. population ages, the number of new dementia cases per year is expected to double by 2060. The demographic shift promises to increase the pressure on already-strained healthcare systems and caregivers.
It’s also a challenge for the law.
At a conference hosted by the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School last month, researchers from multiple disciplines, both from across Harvard and from other universities, explored how current laws too often strip decision-making authority from older adults, and what improvements could help those older adults keep more of their autonomy as their capacities decline.
Not all older adults experience cognitive decline, and not all cognitive decline looks the same, said Duke Han, professor of psychology, family medicine, neurology, and gerontology at the University of Southern California. For example, the entorhinal cortex, which mediates between parts of the brain responsible for drawing on experiences and for values-based decision-making, is often one of the first parts of the brain to be affected in Alzheimer’s disease. Researchers at Han’s lab recently found that people with thinning in that region are likelier to fall victim to financial scams. It’s a finding that could help explain why someone might function well in most areas of life while requiring decision-making support with their finances.
The more physically frail an older adult is, the likelier they are to report financial exploitation, Han said. But family and friends can safeguard against those trends. “Social connectedness is important, but it’s not just how many connections someone has,” he said. “In our most recently published paper, we found that it’s really the depth of connection socially that someone has that seems to be protective in this regard.”
The law has traditionally taken a binary approach to decision-making capacity: Either you have it or you don’t, and those who don’t have been labeled incapacitated, incompetent, or insane in some states.
“Current state statutes, which include living wills or advance directives, powers of attorney for healthcare, powers of attorney for financial matters, supported decision-making, default surrogate decision-making statutes … These just don’t fit individualized circumstances very well. We call them one-size-fits-all,” said Leslie Francis, Alfred C. Emery Distinguished Professor of Law and Distinguished Professor of Philosophy at the University of Utah.
Often, the law has focused on transferring rights and protections to family members or other representatives who make decisions for those deemed unfit. But that approach can sideline the preferences and values of the older adults themselves, who may still have capacities to manage some or most of their own affairs.

A 2023 piece of model legislation from the American Bar Association, the New Uniform Health Care Decisions Act, would move states in the direction of autonomy for those with cognitive decline. It includes a model form written in plain language that allows individuals not only to indicate specific types of care they do or do not want, but also to identify goals and values they wish to guide future healthcare decisions, reflecting the deeply personal realities of aging.
To date, only two states — Delaware and Utah — have adopted the New Uniform Health Care Decisions Act. But an international body may soon offer its own guidance for protecting the rights of older adults. In April 2025, the United Nations Human Rights Council passed a resolution to start negotiations for a new human rights treaty for older persons.
Hezzy Smith, director of advocacy initiatives at the Harvard Law School Project on Disability, said the U.N.’s treaty would build on the agency’s Convention on the Rights of Persons with Disabilities. Smith said the U.N. committee charged with monitoring the convention’s implementation “has made very clear that people with disabilities have been subject to egregious human rights violations as a result of legal capacity restrictions, and it made it very clear that, from a human rights perspective for the committee, states will have to do wholesale transformations of their substituted decision-making regimes in their home countries in order to usher in … regimes of supported decision-making. They rejected the notion that there are haves and have-nots with regard to legal capacity.”
Smith said U.N. member states might take a different approach for older adults, potentially prioritizing positive outcomes over optimizing for maximal rights preservation — a distinction that could shape how the international community balances autonomy with protections for aging populations.
Other Harvard speakers at the conference were I. Glenn Cohen, Petrie-Flom Center faculty director, James A. Attwood and Leslie Williams Professor of Law, and deputy dean of HLS; Susannah Baruch, executive director of the Petrie-Flom Center; Michael Ashley Stein, visiting professor at HLS and executive director of the HLS Project on Disability; Francis X. Shen, professor of law at the University of Minnesota and member of the Harvard Medical School Center for Bioethics; Abeer Malik, Petrie-Flom Center student fellow; and Diana Freed, assistant professor of computer and data science at Brown University and a visiting researcher at the Petrie-Flom Center.
As reading scores decline, a study primed to help grinds to a halt

Phil Capin, assistant professor of education, saw two research grants cut in May.
Niles Singer/Harvard Staff Photographer
Partnership with Texas, Colorado researchers terminated as part of federal funding cuts targeting Harvard
Liz Mineo
Harvard Staff Writer
Children who struggle with reading often also have difficulty focusing, according to experts. Yet these students frequently receive ineffective support, with reading and attention difficulties addressed separately.
Intrigued by the possibility of helping students with reading and behavioral attention struggles, Harvard expert Phil Capin and his colleague Garrett Roberts from the University of Denver designed a study to investigate the benefits of an integrated approach to intervention. The research project aimed to test the effects of a single, unified intervention called Supporting Attention and Reading for Kids (SPARK) on students in grades 3-5. Funded by the National Institutes of Health (NIH), Capin’s $3.2 million research grant started in July of last year.
The school-based research part of the project was set to begin in the fall — with the participation of about 400 students from six schools in Texas — in partnership with experts at the University of Denver and the University of Texas. Researchers were to track students for four years to determine if the intervention helped them improve in word reading, vocabulary, and reading fluency and comprehension.
But everything came to a stop when Capin’s project was terminated in May as part of the Trump administration’s decision to freeze more than $2.2 billion in federal research funding in its ongoing dispute with Harvard.
It is a blow to an important research agenda, said Capin, an assistant professor at the Harvard Graduate School of Education. But the biggest loss is for students who may have been helped with new research-informed practices, he added. Estimates suggest that 25 to 40 percent of students with reading difficulties experience elevated levels of inattention, according to different studies. In 1998 testimony before the Senate, leadership at NIH concluded that literacy difficulties in the U.S. amounted to a major public health problem.
“The need to improve reading instruction for students who are vulnerable for reading difficulties is not going away,” said Capin. “We’re committed to finding ways to continue the work, but how that occurs is unclear.”
Capin hopes that the research continues with support from the University, other funding agencies, or private foundations. He remains optimistic.
“It’s unlikely that we will procure the amount of funds that were needed to conduct the research that we had designed, which was evaluated by our peers through a review process and determined to be innovative and significant,” Capin said. “I don’t think we’re going to be able to do the exact study that we had proposed, but we’re committed to finding solutions to advance this work so that we can improve outcomes for those we’re committed to help.”
Second project dealt setback
For Capin, the week of May 12 was a tough one. The same week he learned his SPARK project was terminated, another research grant of his was stopped before it began its second year. Called STORIES, the four-year project was to develop and evaluate a novel intervention to support multilingual students in grades 2-4 to better understand narrative texts.
The $2 million project was funded by the Institute of Education Sciences, the research arm of the Department of Education. The research was to be conducted in partnership with experts in speech-language pathology at Utah State University, the University of Texas at Austin, and the Revere Public Schools in Massachusetts, which serves a large population of English learners.
“Many students who are multilingual are developing their proficiency in English,” said Capin. “Research suggests many of these students would benefit from additional supports to develop their academic language in English.”
Reading scores among U.S. students have been declining. According to the latest Nation’s Report Card, reading scores among fourth graders in 2024 were lower than in 2022 and even lower than in 2019. This project’s termination prevents students and teachers from working together to improve outcomes, Capin said.
“These decisions impact all the students who would have been served by these practices through the research, and also the countless teachers and students who could have potentially gained knowledge about new evidence-based practices,” he said.
Like other research grants that were frozen by the administration, Capin’s two research projects were funded based on careful peer reviews and a rigorous process. Capin said that the abrupt termination of these grants puts at risk the nation’s research enterprise, which should be kept independent from political pressure.
“Decisions about funding — whether to fund scientific research or whether to terminate scientific research — should be based on careful review, and on the merits of whether the research can improve the lives of individuals,” Capin said. “The grants that were funded and then consequently terminated went through a really meticulous process to determine whether the ideas were innovative and the methods were appropriate. Both projects had the potential to improve the lives of students. Even with these changes, our commitment to advance literacy outcomes for children remains strong.”
John C.P. Goldberg named Harvard Law School dean

John C.P. Goldberg.
Veasey Conway/Harvard Staff Photographer
Leading scholar in tort law and political philosophy has served as interim dean since March 2024
John C.P. Goldberg, Carter Professor of General Jurisprudence, has been named the Morgan and Helen Chu Dean and Professor of Law of Harvard Law School. He steps into the permanent role after serving as interim dean since March of last year.
“Throughout our search process, we sought a leader who could navigate today’s complex landscape and continue to build on the Law School’s academic strengths and impact. John is that leader,” said President Alan M. Garber. “He has an unwavering belief in excellence and inclusion, and the essential role that academic freedom plays in nurturing both of those aims. We are delighted that he will continue to lead and serve Harvard Law School.”
Known for his integrity, intellect, and effective leadership, Goldberg has held several administrative positions that have given him extensive institutional knowledge of HLS. He has been a faculty member since 2008, served as deputy dean from 2017 to 2022, and been a member and chair of HLS-specific committees, such as the Lateral Appointments Committee.
In addition to his service to HLS, Goldberg has contributed to the University broadly throughout his tenure at Harvard. He has advised on an array of issues, serving as a member of committees such as the Provost’s Advisory Committee, the University Discrimination and Harassment Policy Steering Committee, and as chair of the Electronic Communications Policy Oversight Committee.
“I am deeply grateful for this opportunity to serve the students, faculty, staff, and graduates of Harvard Law School, particularly at a moment in which law and legal education are so salient,” Goldberg said. “Working together, we will continue to advance our understanding of the law, and to explore how it can best serve constitutional democracy, the rule of law, and the bedrock American principle of liberty and equal justice for all. In doing so, we will build on the best traditions of this great institution and our profession: rigorous inquiry and instruction, open and reasoned discourse, and conscientious and vigorous advocacy.”
Goldberg has published numerous works, ranging from textbooks to scholarly articles. An expert in tort law, Goldberg was the editor in chief of the Journal of Tort Law from 2009 to 2015 and remains a member of its editorial board. He also co-authored a leading casebook, “Tort Law: Responsibilities and Redress,” and “The Oxford Introductions to U.S. Law: Torts.” Goldberg is currently the co-editor in chief of the Journal of Legal Analysis and an editorial board member of the journal Legal Theory.
Along with his frequent co-author, Professor Benjamin Zipursky, Goldberg was recognized consecutively by the Association of American Law Schools with the Section on Torts and Compensation Systems William L. Prosser Award in 2023 and the Section on Jurisprudence Hart-Dworkin Award in Legal Philosophy in 2024. Their co-authored Harvard University Press book on the vital role of tort law in the legal system, “Recognizing Wrongs,” was given the Civil Justice Scholarship Award by the National Civil Justice Institute in 2023.
“I am delighted that John Goldberg will be the dean of Harvard Law School,” said Provost John F. Manning. “He cares deeply about the legal profession and about Harvard Law School, and he approaches everything he does with integrity, humility, and wisdom. It has been an honor to work closely with him over many years, and I know that he will be a superb dean.”
Before arriving at Harvard, Goldberg taught at Vanderbilt University Law School, where he held the role of associate dean for research from 2005 to 2008. Early in his career, he was a clerk to Justice Byron R. White on the Supreme Court and to Judge Jack B. Weinstein in the Eastern District of New York, and was an associate at the Boston firm Hill and Barlow.
Goldberg earned his B.A. from Wesleyan University with high honors. Additionally, he holds an M.Phil. in politics from Oxford University and an M.A. in politics from Princeton University. He earned his J.D. from New York University School of Law, where he was editor in chief of the NYU Law Review.
Who decides when doctors should retire?

Liz Mineo
Harvard Staff Writer
Expert in law, bioethics sees need for cognitive testing amid graying of nation’s physician workforce
As the national physician workforce gets older, concerns about cognitive decline among doctors are increasing, highlighting the need for testing late-career practitioners, said Sharona Hoffman, a specialist in law and bioethics, at a recent Harvard Law School panel.
Hoffman, who teaches at Case Western Reserve University School of Law, spoke at a conference on law, healthcare, and aging sponsored by the Petrie-Flom Center. The event covered topics including the challenges healthcare systems face in adapting to patients with increased longevity; older adults and issues of discrimination, protection, and paternalism; and technology and commercialization in aging.
“Cognitive decline in the physician workforce is a problem, and it’s a problem that has come to the attention of healthcare organizations,” said Hoffman.
Yale New Haven Hospital tested 141 clinicians who were 70 and older between October 2016 and January 2019 and found 12 percent had cognitive deficits that could affect job performance, said Hoffman.
Nationwide, a large number of doctors practice beyond typical retirement age. Hoffman cited a report by the Association of American Medical Colleges, which found that in 2024, 20 percent of working physicians were 65 and older, and 22 percent were between 55 and 64 years old.
Cognitive decline often results from brain changes caused by the narrowing or blockage of arteries by atherosclerotic plaque, which starts developing around age 60. Some of its signs are slow processing speed, difficulties recalling words or names, and concentration and attention problems.
Veteran professionals may be at risk of cognitive decline and should be tested to protect both patients and doctors, said Hoffman.
But a testing program’s implementation should be done with care, said Hoffman, because it could exacerbate the nation’s physician shortage. In the same report, the AAMC predicted the country will face a shortage of up to 86,000 physicians by 2036.
Employers who might want to establish a testing program for late-career practitioners should also be aware of ethical obligations and legal implications regarding age and disability discrimination, said Hoffman.
State medical boards, which are in charge of protecting public welfare and implementing license renewal procedures, could play a role in identifying clinicians with cognitive decline, said Hoffman, but they would have to include due process protections.
“The state medical boards could use experts and figure out the right kind of test and the right cut-off score,” Hoffman said. “I’m assuming there would be a lot of resistance to any kind of testing program at all, but hopefully we could convince people that actually this is in their best interest. It is meant to protect them and make sure that their career doesn’t end in disaster.”
In another talk, Alessandro Blasimme, lecturer at the Department of Health Sciences and Technology at ETH Zurich, spoke about the challenges that increased life expectancy poses to healthcare providers and the allocation of health resources.
“We are all perfectly aware of the fact that life expectancy is on the rise across the globe,” said Blassime. “This is a phenomenon that has been going on for quite some time, and there are indications that it’s not going to stop, at least not in the next couple of decades, which increases the burden of age-related diseases and makes it particularly challenging for healthcare systems to cope with that.”
With the arrival of the concept of biological age in the medical sphere, there has been a shift in how experts define health and longevity, said Blassime. People age at different rates, with some remaining healthy and active well into old age while others become frail and develop health conditions that can shorten their life span.
Biological age reflects the body’s actual health condition and is shaped by genetics, lifestyle, and environment. Experts see it as a more accurate measure of aging than chronological age, which simply counts the years since birth.
Unlike chronological age, which cannot be changed, biological age can be altered by changes in diet, exercise, stress management, sleep quality, and other healthy behaviors.
“Biological age describes the difference between the expected and the actual state of a person,” said Blasimme.
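That definition invites a toy calculation. The Python sketch below is a minimal illustration, with an entirely invented “aging clock” and made-up coefficients rather than any published model: it computes an age gap as predicted biological age minus chronological age, the quantity Blasimme describes.

```python
# Toy illustration of the "age gap" idea: biological age minus
# chronological age. The model and coefficients are invented for
# this example and do not come from any published aging clock.

def predicted_biological_age(chronological_age: float,
                             smoker: bool,
                             weekly_exercise_hours: float) -> float:
    """Hypothetical linear 'aging clock' for illustration only."""
    age = chronological_age
    age += 4.0 if smoker else 0.0               # invented penalty
    age -= 0.5 * min(weekly_exercise_hours, 6)  # invented benefit, capped
    return age

chronological = 60.0
biological = predicted_biological_age(chronological, smoker=False,
                                      weekly_exercise_hours=5)
age_gap = biological - chronological  # negative: "younger" than expected
print(f"biological age {biological:.1f}, gap {age_gap:+.1f} years")
```

A negative gap corresponds to someone aging more slowly than the calendar suggests; in real models, as Blasimme notes below, the gap also inherits whatever biases sit in the training data.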
In his remarks, Blasimme raised concerns about how biological age may be used as the concept becomes more widespread in efforts to prioritize healthspan over lifespan.
“We need to understand that biological age is something that can be tempting to use, but biological age models, like any other predictive models, may reproduce or amplify biases in the data that we use to create them,” said Blasimme. “Disadvantaged people are more likely to have higher biological ages than others … And there are possible misguided uses of biological age, for example, using this criterion to rush to interventions that are not proven to slow down aging.”
Unlocking the promise of CAR-T

Alvin Powell
Harvard Staff Writer
Research across multiple fronts seeks to expand impact of a cancer therapy that has left patients and doctors awestruck
David Avigan doesn’t like to use the word “miracle” to describe CAR-T-cell therapy, but he knows what people mean when they do. He also remembers the first time it happened with one of his patients.
“As a field, we’re always a little cautious — our patients are on a roller coaster,” said Avigan, director of the Cancer Center at Beth Israel Deaconess and the Theodore W. and Evelyn G. Berenson Professor of Medicine for the Study of Oncology at Harvard Medical School. “But frankly, we’ve seen very dramatic responses in patients with advanced disease.”
First approved by the FDA in 2017, CAR-T-cell therapy enlists the body’s immune system in the fight against cancer. It has triggered rapid improvement in some of the sickest patients, those whose hopes had faded with the failure of one treatment after another. Physicians report astonishing results: tumors melting away over weeks or even just days and people who appeared to be on death’s door getting up and reclaiming their lives.
“They had received every other known therapy and experimental therapy — nothing worked — and after CAR-T therapy they would go into remission and just a week later, be walking around like they were totally normal, even if they got really sick during the therapy,” said Robbie Majzner, an associate professor of pediatrics at the Medical School and director of the Pediatric and Young Adult Cancer Cell Therapy Program at the Dana-Farber/Boston Children’s Cancer and Blood Disorders Center. “We had one patient who almost died during the therapy and two weeks later he was leukemia-free and snowboarding. It was just unbelievable.”
Eric Smith, an assistant professor of medicine at the Medical School and Dana-Farber’s director of translational research, immune effector cell therapies, describes CAR-T as a “platform” rather than a specific treatment. Its weapon is one of the body’s most potent fighters — the T-cell, refined over millions of years to destroy bacteria, viruses, fungi, and other invaders. The CAR, or chimeric antigen receptor, is the T-cell’s targeting system, which can be tuned by bioengineers to different targets. That tuning ability allowed Smith and others to engineer a therapy originally effective against leukemia and lymphoma into a weapon against a third blood cancer, multiple myeloma. It also underlies excitement around the therapy’s potential to treat not just other cancers but also noncancerous conditions such as autoimmune diseases and chronic infections.
“The platform is just so amenable to further engineering — the different things we can do to increase efficacy — that we’re very enthusiastic,” Smith said. “We’ll be curing a higher percentage of patients with the next iteration.”
Eric Smith said he’s seen many amazing recoveries. One of his first was the second patient ever to receive CD19-directed CAR-T-cell therapy, during his oncology fellowship at Memorial Sloan Kettering Cancer Center in New York City.
“We were shocked to see such an amazing clinical response,” Smith said. “We published a case report of a patient who had such rapidly progressive myeloma that in between starting on conditioning therapy and getting her CAR-T-cells, she developed paralysis from the myeloma in her spine. Then, with the CAR-T-cells, she was able to recover from that and do well for over a year. So yeah, that’s one of the things that does really make it different from other therapies.”
Veasey Conway/Harvard Staff Photographer

Prior to the development of CAR-T and other immunotherapies, cancer cells largely escaped immune attack: because they arise from the patient’s own tissues, the immune system doesn’t recognize them as invaders.
In CAR-T therapy, physicians extract T-cells from the patient and send them to a cell manufacturing facility, where the CAR is added to the cell surface. The engineered cells are multiplied, frozen, and shipped back to the medical facility to be infused into the patient. Once in the body, the CAR attaches to a molecule — the antigen — on the cancer cell surface, signaling that the T-cell should attack.
How it works

1. T-cells are collected from a patient.
2. T-cells are genetically engineered to produce chimeric antigen receptors, or CARs.
3. The re-engineered cells are multiplied until there are millions of these attacker cells.
4. CAR-T-cells are infused back into the patient.
5. Inside the body, the CAR attaches to an antigen on the surface of a cancer cell.
6. The CAR signals the T-cell to release powerful cytotoxins, causing cell death.

The result: CAR-T-cells are able to recognize and kill cancerous cells and help guard against recurrence. (These steps are restated schematically below.)

Source: Leukemia and Lymphoma Society; illustrations by Judy Blomquist/Harvard Staff
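For readers who prefer to see the workflow as a whole, the sketch below restates the six steps as an ordered pipeline in Python. It is purely schematic: the step names, locations, and descriptions simply mirror the list above and model no clinical or manufacturing process.

```python
# The six steps of CAR-T therapy restated as an ordered pipeline.
# Purely schematic: each entry records an invented step name, where
# the step happens, and a description mirroring the list above.

CAR_T_PIPELINE = [
    ("collect",  "clinic",   "T-cells are collected from the patient"),
    ("engineer", "facility", "cells are engineered to produce CARs"),
    ("expand",   "facility", "engineered cells are multiplied into the millions"),
    ("infuse",   "clinic",   "CAR-T-cells are infused back into the patient"),
    ("bind",     "body",     "the CAR attaches to an antigen on a cancer cell"),
    ("kill",     "body",     "the T-cell releases cytotoxins, killing the cell"),
]

for step_number, (name, location, description) in enumerate(CAR_T_PIPELINE, 1):
    print(f"{step_number}. [{location}] {name}: {description}")
```

The location column hints at a point the article returns to later: the in vivo approach Smith describes would, in effect, move the two “facility” steps into the body.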
The therapy has been most effective against leukemia, lymphoma, and myeloma, which together accounted for 9 percent of U.S. cancer cases last year, affecting 187,000 Americans. Efforts to extend these gains to solid tumors are an important new frontier, specialists say, and a demonstration of the slow but steady momentum that characterizes scientific progress.
Among the most serious obstacles, along with the challenges presented by solid tumors, is the reality that CAR-T doesn’t work for everyone. Remission rates are currently between 50 and 90 percent, depending on the condition, and cancer is still the largest single cause of mortality for those undergoing the treatment.
“I wish I could say all cell therapy is curative for everybody,” said Marcela Maus, a professor at the Medical School and director of the Cellular Immunotherapy Program at Mass General. “We’re not there, but it has that potential and possibility for some patients. Cells are alive and they can have their own ‘thermostat’ for when they need to grow or when they need to shrink in response to something they sense in their environment. That really opens up the possibilities of regenerative medicine or single-shot cures that are very difficult to achieve with other modalities.”

Marcela Maus.
Kate Flock/Massachusetts General Hospital
One of Maus’ primary targets is the solid tumor problem, which is created in part by cell diversity, denser mass, and threats to the survival of CAR-T cells in the hostile immune environment around the malignancy. Last year, she teamed up with Harvard neurosurgeon Bryan Choi to test the therapy against glioblastoma, a devastating brain tumor that kills more than 90 percent of patients within five years. They tackled the diversity issue by engineering a two-pronged CAR that targeted two molecules typically found on the surface of different cancer cells, broadening the T-cells’ attack. The first three participants in the study saw dramatic improvements in their cancer, though the effects were durable in only one. Follow-up data on a total of 10 patients was presented at the American Society of Clinical Oncology this month.
Other research teams have reported mixed results in trials involving lung, prostate, ovarian, and gastric tumors.
“There’s been a lot of hope for the technology to be able to dramatically improve the lives of patients with other diseases for a long time,” said Maus. “That part is a little bit early and it’s been less straightforward to achieve.” Nonetheless, she’s seen enough to feel confident in “significant promise” for advances that bolster attacks against a wider range of cancers.
Another major concern with CAR-T involves side effects, whose severity — a 2024 study blamed side effects for almost 12 percent of non-cancer deaths among CAR-T patients — can cause some doctors to hesitate before recommending the treatment. One of the body’s responses to the therapy is to overreact, releasing proteins called cytokines that amplify the immune signal. Cytokine release syndrome can be severe, with nausea, vomiting, rapid heartbeat, and hallucinations among the symptoms. Another dangerous side effect is neurotoxicity syndrome, in which the heightened immune response affects the brain, causing temporary confusion in mild cases and coma in serious ones.
For patients who have exhausted other options, CAR-T-cell therapy, even with the side effects, is almost always worth trying, Avigan said. For those not so far along, the decision becomes murkier, but several studies suggest that CAR-T interventions offer advantages over standard therapy, he said. One powerful argument for earlier CAR-T is that repeated rounds of chemotherapy take a serious toll on the body, leaving late-stage patients with less effective T-cells. Moving CAR-T from a last resort to a second-line treatment has the potential to exploit T-cells that are more energetic and effective at clearing cancer from the body.

David Avigan, who’s treated hundreds of CAR-T-cell patients since the therapy first became available in 2017:
“There’s no question that for those of us who are in the middle of it, the transformative nature of this makes it an incredible privilege to take care of patients and help them in ways we couldn’t before. I’ve taken care of a lot of patients with CARs and have seen all parts of that spectrum.”
Stephanie Mitchell/Harvard Staff Photographer
“It’s not that this is a perfect therapy for every single person no matter what’s going on,” Avigan said. “But yes, you can take a step back and say, ‘Wow, we’re in a different place than we were 10 years ago.’”
Building blocks of a breakthrough

1987: Yoshikazu Kurosawa first describes the idea of modified CAR-T-cells that could target specific cancer cells.

1989-1993: First-generation CAR-T-cell therapies are designed by Zelig Eshhar and Gideon Gross, but the modified cells don’t survive long in the body and prove ineffective in clinical trials.

1998-2003: Michel Sadelain introduces a co-stimulatory molecule into CAR-T-cells, allowing them to remain active in the body. The team also modifies CAR-T-cells to target a protein on the surface of malignant cells in conditions like leukemia and lymphoma.

2011: Three adult patients with advanced chronic lymphocytic leukemia achieve complete or partial remission after receiving specific CAR-T-cell therapy.

2012: Six-year-old Emily Whitehead becomes the first pediatric patient to receive CAR-T-cell therapy. She recovered and is still thriving. AP photo

2017: First CAR-T-cell therapies are approved by the FDA. More are approved in the following years.

2024: Combining two strategies, CAR-T and bispecific antibodies called T-cell engaging antibody molecules, a Mass General Brigham group achieves dramatic regression in three glioblastoma patients. The work is particularly important because solid tumors can be more complicated to treat than blood cancers.

2025: A study finds that one-third of 97 patients with multiple myeloma, once considered incurable, remain alive and progression-free five years after CAR-T-cell treatment.
Over the next 10 years, CAR-T investigators hope to weaken or remove barriers to the therapy’s effectiveness as a cancer fighter — including cost — while also testing its potential against noncancerous conditions.
Mohammad Rashidian, a researcher and faculty member at Dana-Farber, has developed an “enhancer protein” designed to address two of CAR-T’s shortcomings: T-cell exhaustion that leads to a weak initial response, and responses that fade over time, as can happen in myeloma cases.
The enhancer protein, described in a study published a year ago, links the CAR to an immune system signaling molecule called IL-2. The molecule increases T-cell activity and promotes the development of memory CAR-T-cells, which have the potential to provide protection against cancer the way infection or vaccination might against infectious disease.

Mohammad Rashidian.
Niles Singer/Harvard Staff Photographer
“I’m very optimistic,” Rashidian said. “The data that we have is really beyond what we had expected. You get better-quality T-cells and that translates to much better tumor clearance. If it replicates in patients, we would anticipate substantially better responses and hopefully a lot of patients should be cured.”
CAR-T’s potential extends to autoimmune diseases such as lupus, in which the immune system launches misdirected attacks on the body. For this version of the therapy, Maus said, the CAR is engineered to direct the CAR-T cell to attack B-cells, responsible for the autoimmune attack in lupus. The therapy seems to trigger a reboot of the immune system, she added, which curbs the B-cell attack in the months after treatment.
The cause of that reset is unknown, Maus said, but scientists are increasingly interested in wielding the therapy against other autoimmune conditions, including Type 1 diabetes. “There’s a whole group of autoimmune diseases where this could potentially have a really significant therapeutic benefit,” she said.
The benefits of CAR-T come at a hefty price, with a single cancer treatment costing upward of $400,000. Researchers are hopeful that refinements in care and drug development, among other advances, will act as a counterforce. Smith, of Dana-Farber, is particularly interested in the possibility of removing the cell manufacturing facility from the production cycle.
A key step in the bioengineering process is the delivery of the CAR gene to the T-cell by a carrier known as a vector. The cell uses those instructions to create the CAR on its surface before it’s infused back into the patient. Today, that step takes place in a facility, but Smith and others are working on a process that would inject the vector carrying the CAR gene directly into the patient. The vector would deliver the genetic instructions for the CAR to the T-cell while still in the body, triggering the T-cell to produce the appropriate CAR on the cell surface, all without a stop in a facility.
The idea still faces significant hurdles, including potential off-target effects should the vector deliver to cells other than T-cells, but Smith says it could be a game-changer, simplifying an arduous process for the patient and driving down costs.
Majzner, of Boston Children’s, is working in a landscape different from that of adult cancers, in part because drugmakers don’t want to limit themselves to medicines that treat only pediatric cases, which are significantly fewer. His answer is a CAR that targets a molecule found on cells in several cancer types. Called B7-H3, it would allow drugmakers to create CAR-T cells for pediatric patients with a range of cancers, increasing the patient population and possibly sparking more interest in development of therapies for pediatric patients.

Robbie Majzner, on how seeing impacts of CAR-T firsthand shifted his path:
“Before that, I always thought of lab research like rocket science, a bunch of pathways I didn’t want to memorize and far from the patient. Then, to have been working in Building 10 at the NIH where literally the therapy had been developed and then directly put into patients, you realize how close those things are. That really drove me to get involved in research. It wouldn’t have gone that way had I not seen those types of responses.”
Niles Singer/Harvard Staff Photographer
“We’ll have trials for that in solid tumors and in brain tumors,” Majzner said. “Perhaps it will leapfrog ahead for patients that now receive therapies for pediatric solid tumors that look like they did 40 years ago. Success would change the way we treat these cancers for sure.”
It would also further inspire specialists who have repeatedly come face to face with the power of CAR-T over the past several years. As Avigan put it: “There’s no question that for those of us who are in the middle of it, the transformative nature of this makes it an incredible privilege to take care of patients and help them in ways we couldn’t before.”
Federal judge blocks Trump plan to ban international students at Harvard

Harvard University.
Photo by Grace DuVal
Ruling notes administration action raises serious constitutional concerns
Christina Pazzanese
Harvard Staff Writer
A federal judge in Boston has blocked a Trump administration plan to bar foreign students and scholars from entering the U.S. to study or work at Harvard.
U.S. District Court Judge Allison D. Burroughs granted the University’s request for a preliminary injunction on June 23, finding that the administration’s actions were likely illegal and raised serious constitutional concerns.
Burroughs wrote, “This case is about core constitutional rights that must be safeguarded: freedom of thought, freedom of expression, and freedom of speech, each of which is a pillar of a functioning democracy and an essential hedge against authoritarianism.”
The ruling extends a temporary order issued June 5, one day after a proclamation by President Trump declaring that the federal government would deny visas to international students headed to Harvard.
Trump cited national security concerns, accusing the University of failing to turn over records about its approximately 7,000 international students and recent graduates to the U.S. Department of Homeland Security (DHS), a claim University officials have forcefully denied.
Burroughs admonished DHS and other federal agencies, including Immigration and Customs Enforcement (ICE), the Department of Justice, and the State Department, for taking such an abrupt action with “little thought” to the ramifications it will have on international students or the country.
In the 44-page memorandum and order, the judge concluded that the government’s “misplaced efforts to control a reputable academic institution and squelch diverse viewpoints,” seemingly because those viewpoints may differ from the Trump administration’s, “threaten these rights.”
On June 20, Burroughs issued a preliminary injunction enjoining DHS, ICE, and other agencies from revoking Harvard’s participation in the Student and Exchange Visitor Program.
DHS had moved to pull the University’s certification in May, saying that Harvard had failed to turn over records of student visa holders, a claim that University officials have denied.
The exchange program, overseen by DHS and ICE, collects information about those wishing to study in the U.S. to ensure they’re legitimate students and grants schools permission to host visa-holding citizens of other nations.
Since taking office in January, the Trump administration has frozen more than $3 billion in grants and contracts with Harvard. Officials made a series of demands that include “audits” of academic programs and departments, along with the viewpoints of students, faculty, and staff, and changes to the University’s governance structure and hiring practices.
The University has filed two civil lawsuits alleging the government’s actions against Harvard are unlawful and retaliatory and violate the University’s constitutional rights.
The administration notified the U.S. First Circuit Court of Appeals on June 27 that it plans to file an appeal.
New method combines imaging and sequencing to study gene function in intact tissue
Imagine that you want to know the plot of a movie, but you only have access to either the visuals or the sound. With visuals alone, you’ll miss all the dialogue. With sound alone, you will miss the action. Understanding our biology can be similar. Measuring one kind of data — such as which genes are being expressed — can be informative, but it only captures one facet of a multifaceted story. For many biological processes and disease mechanisms, the entire “plot” can’t be fully understood without combining data types.
However, capturing both the “visuals and sound” of biological data, such as gene expression and cell structure data, from the same cells requires researchers to develop new approaches. They also have to make sure that the data they capture accurately reflects what happens in living organisms, including how cells interact with each other and their environments.
Whitehead Institute for Biomedical Research and Harvard University researchers have taken on these challenges and developed Perturb-Multimodal (Perturb-Multi), a powerful new approach that simultaneously measures how genetic changes such as turning off individual genes affect both gene expression and cell structure in intact liver tissue. The method, described in Cell on June 12, aims to accelerate discovery of how genes control organ function and disease.
The research team, led by Whitehead Institute Member Jonathan Weissman and then-graduate student in his lab Reuben Saunders, along with Xiaowei Zhuang, the David B. Arnold Professor of Science at Harvard University, and then-postdoc in her lab Will Allen, created a system that can test hundreds of different genetic modifications within a single mouse liver while capturing multiple types of data from the same cells.
“Understanding how our organs work requires looking at many different aspects of cell biology at once,” Saunders says. “With Perturb-Multi, we can see how turning off specific genes changes not just what other genes are active, but also how proteins are distributed within cells, how cellular structures are organized, and where cells are located in the tissue. It’s like having multiple specialized microscopes all focused on the same experiment.”
“This approach accelerates discovery by both allowing us to test the functions of many different genes at once, and then for each gene, allowing us to measure many different functional outputs or cell properties at once — and we do that in intact tissue from animals,” says Zhuang, who is also a Howard Hughes Medical Institute (HHMI) investigator.
A more efficient approach to genetic studies
Traditional genetic studies in mice often turn off one gene and then observe what changes in that gene’s absence to learn about what the gene does. The researchers designed their approach to turn off hundreds of different genes across a single liver, while still only turning off one gene per cell — using what is known as a mosaic approach. This allowed them to study the roles of hundreds of individual genes at once in a single individual. The researchers then collected diverse types of data from cells across the same liver to get a full picture of the consequences of turning off the genes.
“Each cell serves as its own experiment, and because all the cells are in the same animal, we eliminate the variability that comes from comparing different mice,” Saunders says. “Every cell experiences the same physiological conditions, diet, and environment, making our comparisons much more precise.”
“The challenge we faced was that tissues, to perform their functions, rely on thousands of genes, expressed in many different cells, working together. Each gene, in turn, can control many aspects of a cell’s function. Testing these hundreds of genes in mice using current methods would be extremely slow and expensive — near impossible, in practice,” Allen says.
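In analysis terms, the mosaic design yields a single table with one row per cell: the identity of the gene turned off in that cell alongside matched imaging and sequencing readouts. The pandas sketch below is a hedged illustration of that idea, with invented column names and numbers; it is not the authors’ actual pipeline.

```python
import pandas as pd

# Illustrative per-cell table from a mosaic perturbation experiment.
# Each cell carries exactly one knocked-out gene (or a control guide)
# plus sequencing and imaging readouts. Names and numbers are invented.
cells = pd.DataFrame({
    "perturbed_gene":   ["control", "GeneA", "GeneA", "GeneB", "control"],
    "fat_droplet_area": [1.0, 3.2, 2.9, 3.1, 1.1],    # imaging readout
    "stress_gene_expr": [0.2, 0.3, 0.25, 1.8, 0.22],  # sequencing readout
})

# Average each readout per knockout, then subtract the control mean:
# every comparison stays within one animal, as the design intends.
summary = cells.groupby("perturbed_gene")[
    ["fat_droplet_area", "stress_gene_expr"]].mean()
effects = summary - summary.loc["control"]
print(effects)
```

Because every row comes from the same liver, each knockout group is compared against control cells that experienced identical conditions, which is exactly the precision Saunders describes.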
Revealing new biology through combined measurements
The team applied Perturb-Multi to study genetic controls of liver physiology and function. Their study led to discoveries in three important aspects of liver biology: fat accumulation in liver cells — a precursor to liver disease; stress responses; and hepatocyte zonation (how liver cells specialize, assuming different traits and functions, based on their location within the liver).
One striking finding emerged from studying genes that, when disrupted, cause fat accumulation in liver cells. The imaging data revealed that four different genes all led to similar fat droplet accumulation, but the sequencing data showed they did so through three completely different mechanisms.
“Without combining imaging and sequencing, we would have missed this complexity entirely,” Saunders says. “The imaging told us which genes affect fat accumulation, while the sequencing revealed whether this was due to increased fat production, cellular stress, or other pathways. This kind of mechanistic insight could be crucial for developing targeted therapies for fatty liver disease.”
The researchers also discovered new regulators of liver cell zonation. Unexpectedly, the newly discovered regulators include genes involved in modifying the extracellular matrix — the scaffolding between cells. “We found that cells can change their specialized functions without physically moving to a different zone,” Saunders says. “This suggests that liver cell identity is more flexible than previously thought.”
Technical innovation enables new science
Developing Perturb-Multi required solving several technical challenges. The team created new methods for preserving the content of interest in cells — RNA and proteins — during tissue processing, for collecting many types of imaging data and single-cell gene expression data from tissue samples that have been fixed with a preservative, and for integrating multiple types of data from the same cells.
“Overcoming the inherent complexity of biology in living animals required developing new tools that bridge multiple disciplines — including, in this case, genomics, imaging, and AI,” Allen says.
Applied together to the same tissue, the two components of Perturb-Multi — the imaging and sequencing assays — provide insights that are unattainable through either assay alone.
“Each component had to work perfectly while not interfering with the others,” says Weissman, who is also a professor of biology at MIT and an HHMI investigator. “The technical development took considerable effort, but the payoff is a system that can reveal biology we simply couldn’t see before.”
Expanding to new organs and other contexts
The researchers plan to expand Perturb-Multi to other organs, including the brain, and to study how genetic changes affect organ function under different conditions like disease states or dietary changes.
“We’re also excited about using the data we generate to train machine learning models,” adds Saunders. “With enough examples of how genetic changes affect cells, we could eventually predict the effects of mutations without having to test them experimentally — a ‘virtual cell’ that could accelerate both research and drug development.”
“Perturbation data are critical for training such AI models and the paucity of existing perturbation data represents a major hindrance in such ‘virtual cell’ efforts,” Zhuang says. “We hope Perturb-Multi will fill this gap by accelerating the collection of perturbation data.”
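In its simplest form, the “virtual cell” idea can be framed as supervised learning from a perturbation to its measured effect. The toy Python sketch below, with invented numbers and an additivity assumption that has no connection to the paper’s actual models, fits a linear response model and predicts an unseen double knockout.

```python
import numpy as np

# Toy "virtual cell": learn perturbation -> expression change from
# single-gene knockouts, then predict a combination. Data invented.
genes = ["GeneA", "GeneB", "GeneC"]

def one_hot(gene: str) -> np.ndarray:
    vec = np.zeros(len(genes))
    vec[genes.index(gene)] = 1.0
    return vec

# Observed mean expression change for each single-gene knockout.
X = np.stack([one_hot(g) for g in genes])
y = np.array([0.8, -1.2, 0.1])  # invented readouts

# Least-squares fit of a linear response model.
weights, *_ = np.linalg.lstsq(X, y, rcond=None)

# Predict the effect of a GeneA + GeneB double knockout (assumes
# additivity, which is the toy part of this sketch).
print(weights @ (one_hot("GeneA") + one_hot("GeneB")))  # about -0.4
```

The additivity assumption is exactly the kind of simplification that large perturbation datasets, of the sort Zhuang hopes Perturb-Multi will generate, would let richer models move beyond.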
The approach is designed to be scalable, with the potential for genome-wide studies that test thousands of genes simultaneously. As sequencing and imaging technologies continue to improve, the researchers anticipate that Perturb-Multi will become even more powerful and accessible to the broader research community.
“Our goal is to keep scaling up. We plan to do genome-wide perturbations, study different physiological conditions, and look at different organs,” says Weissman. “That we can now collect so many types of data from so many cells, at speed, is going to be critical for building AI models like virtual cells, and I think it’s going to help us answer previously unsolvable questions about health and disease.”
© Image: Jennifer Cook Chrysos/Whitehead Institute
President Emeritus Reif reflects on successes as a technical leader
As an electrical engineering student at Stanford University in the late 1970s, L. Rafael Reif was not only working on his PhD but also learning a new language.
“I didn’t speak English. And I saw that it was easy to ignore somebody who doesn’t speak English well,” Reif recalled. To him, that meant speaking with conviction.
“If you have tremendous technical skills, but you cannot communicate, if you cannot persuade others to embrace that, it’s not going to go anywhere. Without the combination, you cannot persuade the powers-that-be to embrace whatever ideas you have.”
Now MIT president emeritus, Reif recently joined Anantha P. Chandrakasan, chief innovation and strategy officer and dean of the School of Engineering (SoE), for a fireside chat. Their focus: the importance of developing engineering leadership skills — such as persuasive communication — to solve the world’s most challenging problems.
SoE’s Technical Leadership and Communication Programs (TLC) sponsored the chat. TLC teaches engineering leadership, teamwork, and technical communication skills to students, from undergrads to postdocs, through its four programs: Undergraduate Practice Opportunities Program (UPOP), Gordon-MIT Engineering Leadership Program (GEL), Communication Lab (Comm Lab), and Riccio-MIT Graduate Engineering Leadership Program (GradEL).
About 175 students, faculty, and guests attended the fireside chat. Relaxed, engaging, and humorous, Reif shared anecdotes and insights about technical leadership from his decades in leadership roles at MIT.
Reif had a transformational impact on MIT. Beginning as an assistant professor of electrical engineering in 1980, he rose to head of the Department of Electrical Engineering and Computer Science (EECS), then served as provost from 2005 to 2012 and MIT president from 2012 to 2022.
He was instrumental in creating the MIT Schwarzman College of Computing in 2018, as well as establishing and growing MITx online open learning and MIT Microsystems Technology Laboratories.
With an ability to peer over the horizon and anticipate what’s coming, Reif used an array of leadership skills to develop and implement clear visions for those programs.
“One of the things that I learned from you is that as a leader, you have to envision the future and make bets,” said Chandrakasan. “And you don’t just wait around for that. You have to drive it.”
Turning new ideas into reality often meant overcoming resistance. When Reif first proposed the College of Computing to some fellow MIT leaders, “they looked at me and they said, no way. This is too hard. It’s not going to happen. It’s going to take too much money. It’s too complicated. OK, then starts the argument.”
Reif seems to have relished “the argument,” or art of persuasion, during his time at MIT. Though hearing different perspectives never hurt.
“All of us have blind spots. I always try to hear all points of view. Obviously, you can’t integrate all of it. You might say, ‘Anantha, I heard you, but I disagree with you because of this.’ So, you make the call knowing all the options. That is something non-technical that I used in my career.”
On the technical side, Reif’s background as an electrical engineer shaped his approach to leadership.
“What’s beautiful about a technical education is that you understand that you can solve anything if you start with first principles. There are first principles in just about anything that you do. If you start with those, you can solve any problem.”
Also, applying systems-level thinking is critical — understanding that organizations are really systems with interconnected parts.
“That was really useful to me. Some of you in the audience have studied this. In a system, when you start tinkering with something over here, something over there will be affected. And you have to understand that. At a place like MIT, that’s all the time!”
Reif was asked: If he were assembling a dream team to tackle the world’s biggest challenges, what skills or capabilities would he want them to have?
“I think we need people who can see things from different directions. I think we need people who are experts in different disciplines. And I think we need people who are experts in different cultures. Because to solve the big problems of the planet, we need to understand how different cultures address different things.”
Reif’s upbringing in Venezuela strongly influenced his leadership approach, particularly when it comes to empathy, a key trait he values.
“My parents were immigrants. They didn’t have an education, and they had to do whatever they could to support the family. And I remember as a little kid seeing how people humiliated them because they were doing menial jobs. And I remember how painful it was to me. It is part of my fabric to respect every individual, to notice them. I have a tremendous respect for every individual, and for the ability of every individual that didn’t have the same opportunity that all of us here have to be somebody.”
Reif’s advice to students who will be the next generation of engineering leaders is to keep learning because the challenges ahead are multidisciplinary. He also reminded them that they are the future.
“What are our assets? The people in this room. When it comes to the ecosystem of innovation in America, what we work on is to create new roadmaps, expand the roadmaps, create new industries. Without that, we have nothing. Companies do a great job of taking what you come up with and making wonderful things with it. But the ideas, whether it’s AI, whether it’s deep learning, it comes from places like this.”
© Photo: Tony Hu
Inspiring student growth
Professors Xiao Wang and Rodrigo Verdi, both members of the 2023-25 Committed to Caring cohort, are aiding in the development of extraordinary researchers and contributing to a collaborative culture.
“Professor Xiao Wang's caring efforts have a profound impact on the lives of her students,” one of her advisees commended.
“Rodrigo's dedication to mentoring and his unwavering support have positively impacted every student in our group,” another student praised.
The Committed to Caring program recognizes faculty who go above and beyond for MIT graduate students.
Xiao Wang: Enriching, stimulating, and empowering students
Xiao Wang is a core institute member of the Broad Institute of MIT and Harvard and an associate professor in the Department of Chemistry at MIT. She started her lab in 2019 to develop and apply new chemical, biophysical, and genomic tools to better understand tissue function and dysfunction at the molecular level.
Wang goes above and beyond to create a nurturing environment that fosters growth and supports her students' personal and academic development. She makes it a priority to ensure an intellectually stimulating environment, taking the time to discuss research interests, academic goals, and personal aspirations on a weekly basis.
In their nominations, her students emphasized that Wang understands the importance of mentorship, patiently explaining fundamental concepts, sharing insights from her own groundbreaking work, and providing her students with key scientific papers and resources to deepen their understanding of the field.
“Professor Wang encouraged me to think critically, ask challenging questions, and explore innovative approaches to further my research,” one of her students commented.
Beyond the lab, Wang nurtures a sense of community among her research team. Her students highly value her regular lab meetings, where “fellow researchers presented … findings, exchanged ideas, and received constructive feedback.”
These meetings foster collaboration, enhance communication skills, and create a supportive environment where all lab members feel empowered to share their discoveries and insights.
Wang is a dedicated and compassionate educator, known for her unwavering commitment to the well-being and success of her students. Her advisees not only excel academically but also develop resilience, confidence, and a sense of belonging.
A different student reflected that although they came from an organic chemistry background with few skills related to the chemical biology field, Wang recognized their enthusiasm and potential. She went out of her way to make sure they could have a smooth transition. “It is because of all her training and help that I came from knowing nothing about the field to being able to confidently call myself a chemical biologist,” the student acclaimed.
Her advisees say that Wang encourages them to present their work at conferences, workshops, and seminars, which helps boost their confidence and establish connections within the scientific community.
“Her genuine care and dedication make her a cherished mentor and a source of inspiration for all who have the privilege to learn from her,” one of her mentees remarked.
Rodrigo Verdi: Committed and collaborative
Professor Rodrigo Verdi is the deputy dean of degree programs and teaching and learning at the MIT Sloan School of Management. Verdi’s research provides insights into the role of accounting information in corporate finance decisions and in capital markets behavior.
Professor Verdi has been active in the majority of the Sloan students’ research journeys. He makes sure to assist students even if he does not directly guide them. One student states that “although Rodrigo is not my primary advisor, he still goes above and beyond to provide feedback and assistance.”
Verdi believes that “an appetite for experimentation, the ability to handle failure, and managing the stress along the way” is the kind of support necessary for especially innovative research.
Another student recounts that they “cannot think of a single recent graduate since … [they] started the PhD program that did not have Rodrigo on their committee.” This demonstrates how much students value his guidance, and how much he cares about their success.
Since his arrival at MIT, he has shown a strong commitment to mentoring students. Despite his many responsibilities as a dean, Rodrigo remains highly accessible to students and eagerly engages with them.
Specifically, Verdi has interacted with more than 90 percent of recent graduates over the past 10 years, contributing significantly to the department’s strong track record in job placements. He has served on the dissertation committee for 18 students in the last 15 years, which represents nearly all of the students in the department.
A student remarked that “Rodrigo has been an exceptional advisor during my job market period, which is known for its high levels of stress.” He offered continuous encouragement and support, making himself available for discussions whenever the student faced challenges.
After each job market interview, Verdi and the student would debrief and discuss areas for improvement. His insights into the academic system, the significance of social skills and networking, and his valuable advice helped the student successfully get a faculty position.
Rodrigo’s mantra is, “people won't care how much you know until they know how much you care,” and his relationships with his students support this maxim.
Verdi has made a lasting impact on the culture of the accounting specialty and on the collegial interactions found at the Sloan School. One of his students praised, “the collaborative culture is impressive: I’d call it a family, where faculty and students are very close to each other.” They described that they “share the same office space, have lunches together, and whenever students want feedback, the faculty is willing to help.”
Verdi has sharp research insights, and always wants to help, even when he is swamped with administrative affairs. He makes himself accessible to students, often staying after hours with his door open.
Another mentee said that “he has been organizing weekly PhD lunch seminars for years, online brown-bags among current and previous MIT accounting members during the pandemic, and more recently the annual MIT accounting alumni conference.” Verdi also takes students out for dinner or coffee, caring about how they are doing outside of academics. The student commended, “I feel lucky that Rodrigo is here.”
© Photo: Gretchen Ertl
Accelerating scientific discovery with AI
Several researchers have taken a broad view of scientific progress over the last 50 years and come to the same troubling conclusion: Scientific productivity is declining. It’s taking more time, more funding, and larger teams to make discoveries that once came faster and cheaper. Although a variety of explanations have been offered for the slowdown, one is that, as research becomes more complex and specialized, scientists must spend more time reviewing publications, designing sophisticated experiments, and analyzing data.
Now, the philanthropically funded research lab FutureHouse is seeking to accelerate scientific research with an AI platform designed to automate many of the critical steps on the path toward scientific progress. The platform is made up of a series of AI agents specialized for tasks including information retrieval, information synthesis, chemical synthesis design, and data analysis.
FutureHouse founders Sam Rodriques PhD ’19 and Andrew White believe that by giving every scientist access to their AI agents, they can break through the biggest bottlenecks in science and help solve some of humanity’s most pressing problems.
“Natural language is the real language of science,” Rodriques says. “Other people are building foundation models for biology, where machine learning models speak the language of DNA or proteins, and that’s powerful. But discoveries aren’t represented in DNA or proteins. The only way we know how to represent discoveries, hypothesize, and reason is with natural language.”
Finding big problems
For his PhD research at MIT, Rodriques sought to understand the inner workings of the brain in the lab of Professor Ed Boyden.
“The entire idea behind FutureHouse was inspired by this impression I got during my PhD at MIT that even if we had all the information we needed to know about how the brain works, we wouldn’t know it because nobody has time to read all the literature,” Rodriques explains. “Even if they could read it all, they wouldn’t be able to assemble it into a comprehensive theory. That was a foundational piece of the FutureHouse puzzle.”
Rodriques wrote about the need for new kinds of large research collaborations as the last chapter of his PhD thesis in 2019, and though he spent some time running a lab at the Francis Crick Institute in London after graduation, he found himself gravitating toward broad problems in science that no single lab could take on.
“I was interested in how to automate or scale up science and what kinds of new organizational structures or technologies would unlock higher scientific productivity,” Rodriques says.
When ChatGPT, then powered by GPT-3.5, was released in November 2022, Rodriques saw a path toward more powerful models that could generate scientific insights on their own. Around that time, he also met Andrew White, a computational chemist at the University of Rochester who had been granted early access to GPT-4. White had built the first large language agent for science, and the researchers joined forces to start FutureHouse.
The founders started out wanting to create distinct AI tools for tasks like literature searches, data analysis, and hypothesis generation. They began with data collection, eventually releasing PaperQA in September 2024, which Rodriques calls the best AI agent in the world for retrieving and summarizing information in scientific literature. Around the same time, they released Has Anyone, a tool that lets scientists determine if anyone has conducted specific experiments or explored specific hypotheses.
“We were just sitting around asking, ‘What are the kinds of questions that we as scientists ask all the time?’” Rodriques recalls.
When FutureHouse officially launched its platform on May 1 of this year, it rebranded some of its tools. PaperQA is now Crow, and Has Anyone is now called Owl. Falcon is an agent capable of compiling and reviewing more sources than Crow. Another new agent, Phoenix, can use specialized tools to help researchers plan chemistry experiments. And Finch is an agent designed to automate data-driven discovery in biology.
On May 20, the company demonstrated a multi-agent scientific discovery workflow to automate key steps of the scientific process and identify a new therapeutic candidate for dry age-related macular degeneration (dAMD), a leading cause of irreversible blindness worldwide. In June, FutureHouse released ether0, a 24B open-weights reasoning model for chemistry.
“You really have to think of these agents as part of a larger system,” Rodriques says. “Soon, the literature search agents will be integrated with the data analysis agent, the hypothesis generation agent, an experiment planning agent, and they will all be engineered to work together seamlessly.”
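Rodriques’ description suggests a familiar orchestration pattern in which each agent consumes the previous agent’s output. The Python sketch below is a generic illustration with invented placeholder functions; it is not FutureHouse’s API, and the real agents are available through the platform described below.

```python
# Generic multi-agent orchestration pattern, loosely modeled on the
# workflow Rodriques describes. Every function here is an invented
# placeholder, not FutureHouse's actual API.

def literature_agent(question: str) -> list[str]:
    """Stand-in for a retrieval agent such as Crow: cited findings."""
    return [f"finding relevant to: {question}"]

def hypothesis_agent(findings: list[str]) -> str:
    """Stand-in for a hypothesis-generation agent."""
    return f"hypothesis synthesized from {len(findings)} finding(s)"

def experiment_planner(hypothesis: str) -> str:
    """Stand-in for an experiment-planning agent such as Phoenix."""
    return f"protocol to test: {hypothesis}"

def run_workflow(question: str) -> str:
    findings = literature_agent(question)    # step 1: survey the literature
    hypothesis = hypothesis_agent(findings)  # step 2: propose an explanation
    return experiment_planner(hypothesis)    # step 3: plan the experiment

print(run_workflow("Which pathways are implicated in dAMD?"))
```

On FutureHouse’s account, the value of such a chain comes from engineering the handoffs so the agents “work together seamlessly.”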
Agents for everyone
Today anyone can access FutureHouse’s agents at platform.futurehouse.org. The company’s platform launch generated excitement in the industry, and stories have started to come in about scientists using the agents to accelerate research.
One of FutureHouse’s scientists used the agents to identify a gene that could be associated with polycystic ovary syndrome and come up with a new treatment hypothesis for the disease. Another researcher at the Lawrence Berkeley National Laboratory used Crow to create an AI assistant capable of searching the PubMed research database for information related to Alzheimer’s disease.
Scientists at another research institution have used the agents to conduct systematic reviews of genes relevant to Parkinson’s disease, finding FutureHouse’s agents performed better than general agents.
Rodriques says scientists who think of the agents less like Google Scholar and more like a smart assistant scientist get the most out of the platform.
“People who are looking for speculation tend to get more mileage out of ChatGPT o3 deep research, while people who are looking for really faithful literature reviews tend to get more out of our agents,” Rodriques explains.
Rodriques also thinks FutureHouse will soon get to a point where its agents can use the raw data from research papers to test the reproducibility of their results and verify their conclusions.
In the longer run, to keep scientific progress marching forward, Rodriques says FutureHouse is working on embedding its agents with tacit knowledge to be able to perform more sophisticated analyses while also giving the agents the ability to use computational tools to explore hypotheses.
“There have been so many advances around foundation models for science and around language models for proteins and DNA, that we now need to give our agents access to those models and all of the other tools people commonly use to do science,” Rodriques says. “Building the infrastructure to allow agents to use more specialized tools for science is going to be critical.”
© Credit: Christine Daniloff, MIT; iStock
When trash becomes a universe
Bottle caps found on the Australian coast.
© TRES [ilana boltvinik + rodrigo viñas], photo illustration by Liz Zonarich/Harvard staff
Sy Boles
Harvard Staff Writer
Artist collective brings ‘intraterrestrial’ worlds to Peabody Museum
The bottle caps washed up along the beaches of Australia looking almost like miniature planets. Some looked like flat, hard planets made of marble; others looked watery and remarkably like Earth. Many of them had been colonized and transformed by aquatic invertebrates called bryozoans.
The peculiar sea trash caught the imagination of the art collective TRES and formed the backbone of their exhibit, “Castaway: The Afterlife of Plastic,” now on display at Harvard’s Peabody Museum of Archaeology & Ethnology. Over a 2½-month road trip in a Toyota camper van, the Mexico City-based duo Ilana Boltvinik and Rodrigo Viñas photographed the bottle caps — as well as soda cans, shoe leather, plastic doll parts, deodorant containers, and rubber gloves — they found washed up along the Australian coast.

“Even our debris has become a platform for other types of life.”
– Ilana Boltvinik
TRES is not new to finding beauty in what others might overlook: Previous projects have featured used chewing gum scraped off the streets of Mexico City, cigarette butts, and even found bottles full of urine. “One of our main concerns is to try to offer a different perspective on trash,” said Viñas. “We’re proposing a more intimate relationship with our residues.”
The inspiration for “Castaway” came during a previous project, “Ubiquitous Trash,” which examined trash collected in Hong Kong. They found bottle caps printed with the image of the Hong Kong actor and celebrity chef Nicholas Tse, but that type of bottle was only available in mainland China, Boltvinik said.
“One of the questions we had at the beginning, because we like following the traces of things, was, ‘OK, this is the bottle cap, where is the rest of the bottle?’ They’re probably at the bottom of the ocean. That made us think of bottle caps as the tips of icebergs that are a small part of a very large story.”

Calcium deposits are visible in this bottle cap in “From the Future to the Present.”
© TRES [iIana boltvinik + rodrigo viñas]
The pair expected to find beautiful bottle caps on their Australian road trip, but they were surprised by the strange, coral-like substance they found growing on and inside them. The substance sometimes carved holes in the plastic or turned its surface into entirely new shapes and textures that looked like the surfaces of alien worlds. They consulted Paul Taylor, an invertebrate paleontologist and bryozoologist at the Natural History Museum in London, who identified the growths as the calcium deposits of Jellyella eburnea, a species in the phylum Bryozoa.
Bryozoans are microscopic invertebrates that work together to build the elaborate calcium-based structures that TRES encountered. Bryozoans are known for the division of labor within their colonies. Some of them filter water; others specialize in reproduction; still others construct their homes.
“It’s another universe, it’s amazing,” Boltvinik said.

Trees in Yallingup Beach, Western Australia, resemble found rope in “Parallel Lives I.”
© TRES [ilana boltvinik + rodrigo viñas]

Found plastic rope resembles a tree in Yallingup Beach, Western Australia, in “Parallel Lives II.”
© TRES [ilana boltvinik + rodrigo viñas]
The exhibit invites viewers to break down the barriers between natural and unnatural, valuable and disposable, good and bad. After all, plastic has become new home worlds for an “intraterrestrial” life form, as TRES put it, and that life form has terraformed those worlds in its image.
The exhibit is the result of the Robert Gardner Fellowship in Photography, a Peabody Museum effort that funds established artists to create and publish a major work of photography “on the human condition anywhere in the world.” TRES received the fellowship in 2016.
Ilisa Barbash, curator of visual anthropology at the Peabody Museum and the curator of “Castaway,” said the pieces raise a question that often comes up in her field.
“It’s always a problem in anthropology, the aesthetics, especially when you’re dealing with difficult topics — trauma or war or garbage. What if the pictures are beautiful?”

“Intraterrestrial Aliens: Forgotten Smell.”
© TRES [ilana boltvinik + rodrigo viñas]

“Things in a Forgotten Map I.”
© TRES [ilana boltvinik + rodrigo viñas]
The exhibition also draws connections to Harvard’s scientific history. Alexander Agassiz, son of Louis Agassiz, who founded Harvard’s Museum of Comparative Zoology, led expeditions to Australia in the same region that TRES explored. The Peabody Museum collaborated with the Museum of Comparative Zoology’s Ernst Mayr Library to include actual Jellyella eburnea structures from Australia in the exhibit.
“We are not in control of everything,” Boltvinik said. “Even our debris has become a platform for other types of life. It’s not that it was ever designed for something like that, but the world is bigger than humans, and things that happen in the world are bigger than humans.”
“Castaway: The Afterlife of Plastic” is on display through April 6 at Harvard’s Peabody Museum of Archaeology & Ethnology.
Faces of MIT: Ylana Lopez
Ylana Lopez oversees programs and events at the Martin Trust Center for MIT Entrepreneurship. The Trust Center offers more than 60 entrepreneurship and innovation courses across campus, a dedicated entrepreneurship and innovation track for students pursuing their MBA, online courses for self-learners at MIT and around the globe, and programs for people both affiliated and not affiliated with the Institute. As assistant director, academics and events, at the Trust Center, Lopez leads an array of programs and events, while also assisting students and faculty members.
After graduating from Rutgers University, Lopez conducted research in human-computer interaction at Princeton University. After Princeton, she worked for the health care software company Epic Systems, in quality management and user experience. While at Epic Systems, she was simultaneously working on a startup with two of her friends, Kiran Sharma and Dinuri Rupasinghe. One of the startup co-founders, who was an MIT undergraduate student, applied for them to take part in the Trust Center’s flagship startup accelerator delta v, and the trio was accepted.
Delta v is a highly competitive entrepreneurial program that runs annually from June to August, with 20 to 25 startup teams accepted each year. At the end of each month, there is a mock board meeting with a board of advisors consisting of industry experts specifically curated to support each startup team’s goals. Programming, coaching sessions, workshops, lectures, and pitch practices take place throughout delta v, and the program culminates in September with a demo day in Kresge Auditorium with thousands of people in attendance.
Prior to delta v, Lopez decided to leave her full-time job to focus solely on the startup. Once she and her partners went their separate ways, she was looking for a career change, which led her to reflect on her formative summer at MIT. In spring 2023, Lopez applied for an open position at the Trust Center to be an academic coordinator. Soon after, she was offered and accepted the role, and a year later was promoted to assistant director for academics and events. Lopez’s time at MIT has come full circle, as her current position includes serving as a co-director of delta v. Like many of her colleagues who are serial entrepreneurs, Lopez has also started a design studio on the side in the past year, called Mr. Mango, which provides creative design services for the film and music industries.
Lopez has always loved education and planned to become a teacher before deciding to enter the field of technology. Because of this, she describes working at MIT, and being a staff member in the Trust Center, as having the best of both worlds. While delta v is the flagship accelerator, Lopez also supports shorter programs including MIT Fuse, a three-week, hands-on startup sprint that takes place during Independent Activities Period (IAP), and t=0, a festival of events that kicks off each school year to promote entrepreneurship at MIT. In addition to delta v, other programs are available to those outside of MIT, as the Trust Center sees the value of bringing together an ecosystem that is not solely composed of those at the Institute.
At the core of the Trust Center is the belief that entrepreneurship is a tool to change the world. The staff also believe entrepreneurship can be taught, and is not just for a select few. Lopez and her colleagues are highly collaborative and work in an office space that they affectionately call “the bullpen.” The office layout and shared nature of their work mean that no one is a stranger. With at least two events per week, late nights can turn into early mornings, but Lopez and her colleagues love what they do. She is grateful for the growth she has had in her time at the Trust Center and the opportunity to be a part of a motivated, fun, and talented team.
Trust Center managing director Bill Aulet, the Ethernet Inventors Professor of the Practice of Entrepreneurship, cannot sing Lopez’s praises enough. “In my now almost two decades running this center, I have never seen anyone better at really understanding the students, our customers, and translating that back into high-quality and creative programs that delight them and serve the mission of our center, MIT Sloan, and MIT more broadly. We are so fortunate to have her.”
Soundbytes
Q: What is your favorite project that you have worked on?
A: This semester we piloted the Martin Trust Center Startup Pass. It is an opportunity for startups, regardless of what stage they are in, to have a daily, dedicated workspace at the Trust Center to make progress on their ventures. We set aside half of our space for what we call “the beehive” for startups to work alongside other founders and active builders at MIT. It’s great for students to sit alongside people who are building awesome things and will provide feedback, offer support, and really build a community that is entirely based on the spirit and collaboration that naturally come to entrepreneurs. Entrepreneurship can be lonely; therefore, a lot of our efforts go toward helping build networks that make it less so. In just one semester, we’ve already created a community of over 80 founders across MIT!
I’m also excited about revamping one of our rooms into a creative studio. We noticed that startups could benefit from having a space that has capabilities for creating content like podcasts, photography, videography, and other types of creative work. Those things are important in entrepreneurship, so we are currently cultivating a space that any entrepreneur at MIT can utilize.
Q: How would you describe the MIT community?
A: We have such a wonderful community here. The Trust Center supports all of MIT, so we have many programs that allow us to see a lot of people. There can be silos, so it’s great that we bring people together, regardless of their backgrounds, experience, or interests, in one place to become entrepreneurs. The MIT community is a group of inspiring, passionate people who are very welcoming. It’s a very exciting community to be a part of.
Q: What advice would you give someone who is starting a job at MIT?
A: If your day-to-day is typically in one office or setting, over time it can be easy to find yourself in a bubble. I highly recommend breaking out of your bubble by making the effort to meet as many people outside of the group that you work with directly as possible. I have met a number of people across different departments, even if we don’t have much direct overlap in terms of work, and they have been incredibly helpful, gracious, and welcoming. You never know if an introductory or impromptu conversation with someone might lead to an awesome collaboration or new initiative. It’s great being in a community with so many talented people.
© Photo courtesy of Ylana Lopez.
MIT and Mass General Brigham launch joint seed program to accelerate innovations in health
Leveraging the strengths of two world-class research institutions, MIT and Mass General Brigham (MGB) recently celebrated the launch of the MIT-MGB Seed Program. The new initiative, which is supported by Analog Devices Inc. (ADI), will fund joint research projects led by researchers at MIT and Mass General Brigham. These collaborative projects will advance research in human health, with the goal of developing next-generation therapies, diagnostics, and digital tools that can improve lives at scale.
The program represents a unique opportunity to dramatically accelerate innovations that address some of the most urgent challenges in human health. By supporting interdisciplinary teams from MIT and Mass General Brigham, including both researchers and clinicians, the seed program will foster groundbreaking work that brings together expertise in artificial intelligence, machine learning, and measurement and sensing technologies with pioneering clinical research and patient care.
“The power of this program is that it combines MIT’s strength in science, engineering, and innovation with Mass General Brigham’s world-class scientific and clinical research. With the support and incentive to work together, researchers and clinicians will have the freedom to tackle compelling problems and find novel ways to overcome them to achieve transformative changes in patient care,” says Sally Kornbluth, president of MIT.
“The MIT-MGB Seed Program will enable cross-disciplinary collaboration to advance transformative research and breakthrough science. By combining the collective strengths and expertise of our great institutions, we can transform medical care and drive innovation and discovery with speed,” says Anne Klibanski, president and CEO of Mass General Brigham.
The initiative is funded by a gift from ADI. Over the next three years, the ADI Fund for Health and Life Sciences will support approximately six joint projects annually, with funding split between the two institutions.
“The converging domains of biology, medicine, and computing promise a new era of health-care efficacy, efficiency, and access. ADI has enjoyed a long and fruitful history of collaboration with MIT and Mass General Brigham, and we are excited by this new initiative’s potential to transform the future of patient care,” adds Vincent Roche, CEO and chair of the board of directors at ADI.
In addition to funding, teams selected for the program will have access to entrepreneurial workshops, including some hosted by The Engine — an MIT-built venture firm focused on tough tech. These sessions will connect researchers with company founders, investors, and industry leaders, helping them chart a path from breakthrough discoveries in the lab to real-world impact.
The program will launch an open call for proposals to researchers at MIT and Mass General Brigham. The first cohort of funded projects is expected to launch in fall 2025. Awardees will be selected by a joint review committee composed of MIT and Mass General Brigham experts.
According to MIT’s faculty lead for the MIT-MGB Seed Program, Alex K. Shalek, building collaborative research teams with leaders from both institutions could help fill critical gaps that often impede innovation in health and life sciences. Shalek also serves as director of the Institute for Medical Engineering & Science (IMES), the J. W. Kieckhefer Professor in IMES and Chemistry, and an extramural member of the Koch Institute for Integrative Cancer Research.
“Clinicians often see where current interventions fall short, but may lack the scientific tools or engineering expertise needed to develop new ones. Conversely, MIT researchers may not fully grasp these clinical challenges or have access to the right patient data and samples,” explains Shalek, who is also a member of the Ragon Institute of Mass General Brigham, MIT, and Harvard. “By supporting bilateral collaborations and building a community across disciplines, this program is poised to drive critical advances in diagnostics, therapeutics, and AI-driven health applications.”
Emery Brown, a practicing anesthesiologist at Massachusetts General Hospital, will serve alongside Shalek as Mass General Brigham’s faculty lead for the program.
“The MIT-MGB Seed Program creates a perfect storm. The program will provide an opportunity for MIT faculty to bring novel science and engineering to attack and solve important clinical problems,” adds Brown, who is also the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience at MIT. “The pursuit of solutions to important and challenging clinical problems by Mass General Brigham physicians and scientists will no doubt spur MIT scientists and engineers to develop new technologies, or find novel applications of existing technologies.”
The MIT-MGB Seed Program is a flagship initiative in the MIT Health and Life Sciences Collaborative (MIT HEALS). It reflects MIT HEALS’ core mission to establish MIT as a central hub for health and life sciences innovation and translation, and to leverage connections with other world-class research institutions in the Boston area.
“This program exemplifies the power of interdisciplinary research,” says Anantha Chandrakasan, MIT’s chief innovation and strategy officer, dean of engineering, and head of MIT HEALS. “It creates a critical bridge between clinical practice and technological innovation — two areas that must be deeply connected to advance real-world solutions.”
The program’s launch was celebrated at a special event at MIT’s Samberg Conference Center on March 31.
© Photo: Gretchen Ertl
Using generative AI to help robots jump higher and land safely
Diffusion models like OpenAI’s DALL-E are becoming increasingly useful in helping brainstorm new designs. Humans can prompt these systems to generate an image, create a video, or refine a blueprint, and come back with ideas they hadn’t considered before.
But did you know that generative artificial intelligence (GenAI) models are also making headway in creating working robots? Recent diffusion-based approaches have generated structures and the systems that control them from scratch. With or without a user’s input, these models can make new designs and then evaluate them in simulation before they’re fabricated.
A new approach from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) applies this generative know-how to improving human-made robot designs. Users can draft a 3D model of a robot and specify which parts they’d like a diffusion model to modify, providing the dimensions of those parts beforehand. The AI then brainstorms the optimal shape for these areas and tests its ideas in simulation. When the system finds the right design, you can save it and fabricate a working, real-world robot with a 3D printer, with no additional tweaks required.
The researchers used this approach to create a robot that leaps up an average of roughly 2 feet, or 41 percent higher than a similar machine they created on their own. The machines are nearly identical in appearance: They’re both made of a type of plastic called polylactic acid, and while they initially appear flat, they spring up into a diamond shape when a motor pulls on the cord attached to them. So what exactly did AI do differently?
A closer look reveals that the AI-generated linkages are curved, and resemble thick drumsticks (the musical instrument drummers use), whereas the standard robot’s connecting parts are straight and rectangular.
Better and better blobs
The researchers began to refine their jumping robot by sampling 500 potential designs using an initial embedding vector — a numerical representation that captures high-level features to guide the designs generated by the AI model. From these, they selected the top 12 options based on performance in simulation and used them to optimize the embedding vector.
This process was repeated five times, progressively guiding the AI model to generate better designs. The resulting design resembled a blob, so the researchers prompted their system to scale the draft to fit their 3D model. They then fabricated the shape, finding that it indeed improved the robot’s jumping abilities.
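This sample-select-refit loop closely resembles elite-based search methods such as the cross-entropy method. The sketch below illustrates only that general pattern, not the team’s actual system: `decode_design` and `simulate_jump_height` are hypothetical stand-ins for their diffusion model and physics simulator, and the embedding dimension is invented.

```python
import numpy as np

def decode_design(embedding, rng):
    """Placeholder: a diffusion model would decode the embedding into a design."""
    return embedding + 0.1 * rng.standard_normal(embedding.shape)

def simulate_jump_height(design):
    """Placeholder: a physics simulator would score the candidate design."""
    return -np.sum((design - 1.0) ** 2)  # toy objective with a known optimum

rng = np.random.default_rng(0)
embedding = np.zeros(16)  # initial embedding vector (dimension assumed)

for round_idx in range(5):  # the article reports five refinement rounds
    candidates = [decode_design(embedding, rng) for _ in range(500)]
    scores = [simulate_jump_height(c) for c in candidates]
    top12 = np.argsort(scores)[-12:]  # keep the 12 best performers
    # Re-fit the embedding toward the elite designs, cross-entropy style.
    embedding = np.mean([candidates[i] for i in top12], axis=0)
    print(f"round {round_idx}: best score {max(scores):.3f}")
```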
The advantage of using diffusion models for this task, according to co-lead author and CSAIL postdoc Byungchul Kim, is that they can find unconventional solutions to refine robots.
“We wanted to make our machine jump higher, so we figured we could just make the links connecting its parts as thin as possible to make them light,” says Kim. “However, such a thin structure can easily break if we just use 3D printed material. Our diffusion model came up with a better idea by suggesting a unique shape that allowed the robot to store more energy before it jumped, without making the links too thin. This creativity helped us learn about the machine’s underlying physics.”
The team then tasked their system with drafting an optimized foot to ensure it landed safely. They repeated the optimization process, eventually choosing the best-performing design to attach to the bottom of their machine. Kim and his colleagues found that their AI-designed machine fell far less often than its baseline, to the tune of an 84 percent improvement.
The diffusion model’s ability to upgrade a robot’s jumping and landing skills suggests it could be useful in enhancing how other machines are designed. For example, a company working on manufacturing or household robots could use a similar approach to improve their prototypes, saving engineers time normally reserved for iterating on those changes.
The balance behind the bounce
To create a robot that could jump high and land stably, the researchers recognized that they needed to strike a balance between both goals. They represented both jumping height and landing success rate as numerical data, and then trained their system to find a sweet spot between both embedding vectors that could help build an optimal 3D structure.
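The article doesn’t spell out how the two goals were combined, but a common way to balance competing objectives is to scalarize them with a trade-off weight and search along the interpolation between the two separately optimized embeddings. The sketch below illustrates that idea only; the objectives and embeddings are toy stand-ins, not the paper’s method.

```python
import numpy as np

def jump_height(z):       # placeholder; a simulator would supply this score
    return -np.sum((z - 1.0) ** 2)

def landing_success(z):   # placeholder objective for landing stability
    return -np.sum((z + 1.0) ** 2)

def combined_score(z, w):
    """Scalarize the two goals; w trades jump height against landing."""
    return w * jump_height(z) + (1 - w) * landing_success(z)

# Sweep the trade-off weight between two separately optimized embeddings
# (z_jump and z_land are assumed) to locate a "sweet spot" in between.
z_jump, z_land = np.full(16, 1.0), np.full(16, -1.0)
for w in np.linspace(0, 1, 5):
    z = w * z_jump + (1 - w) * z_land  # interpolate between the embeddings
    print(f"w={w:.2f}  combined={combined_score(z, w):.3f}")
```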
The researchers note that while this AI-assisted robot outperformed its human-designed counterpart, it could soon reach even greater heights. This iteration used materials compatible with a 3D printer, and future versions could jump even higher with lighter materials.
Co-lead author Tsun-Hsuan “Johnson” Wang, an MIT PhD student and CSAIL affiliate, says the project is a jumping-off point for new robotics designs that generative AI could help with.
“We want to branch out to more flexible goals,” says Wang. “Imagine using natural language to guide a diffusion model to draft a robot that can pick up a mug, or operate an electric drill.”
Kim says that a diffusion model could also help to generate articulation and ideate on how parts connect, potentially improving how high the robot would jump. The team is also exploring the possibility of adding more motors to control which direction the machine jumps and perhaps improve its landing stability.
The researchers’ work was supported, in part, by the National Science Foundation’s Emerging Frontiers in Research and Innovation program, the Singapore-MIT Alliance for Research and Technology’s Mens, Manus and Machina program, and the Gwangju Institute of Science and Technology (GIST)-CSAIL Collaboration. They presented their work at the 2025 International Conference on Robotics and Automation.
© Image courtesy of MIT CSAIL
Face-to-face with Es Devlin
Es Devlin, the winner of the 2025 Eugene McDermott Award in the Arts at MIT, creates settings for people to gather — whether it’s a few people in a room or crowds filling a massive stadium — arenas in which to dissolve one’s individual sense of self into the greater collective. She herself contains multitudes, equally at home with the 17th-century English metaphysical poet John Donne, the 21st-century icon of music and fashion Lady Gaga, or the Italian theoretical physicist Carlo Rovelli.
In the course of the artist and designer’s three-decade career, Devlin has created an exploded-paint interpretation of the U.K. flag for the Closing Ceremony of the 2012 London Olympics, a box of illuminated rainfall for a production of The Crucible, a 65-foot-diameter AI-generated poetry pavilion for the World Expo, an indoor forest for the COP26 climate conference, a revolving luminous library for over 200,000 visitors in Milan, Beyoncé’s Renaissance tour, and two Super Bowl halftime shows. But Devlin also works on a much smaller scale: the human face. Her world-building is rooted in the earliest technologies of reading and drawing: the simple acts of the eye and hand.
For Congregation in 2024, she made chalk and charcoal drawings of 50 strangers. Before this project, Devlin says, she had most likely drawn around 50 portraits in total over the course of her practice — mostly family or friends, or the occasional covert sketch of a stranger on the subway. But drawing strangers required a different form of attention. “I was looking at another, who often looked different from me in many ways. Their skin pigmentation might be different, the orientation of their nose, eyes, and forehead might be other to what I was used to seeing in the mirror, and I was fraught with anxiety and concern to do them justice, and at pains not to offend,” she recalls.
As she drew, she warded off the desire to please, feeling her unconscious biases surface, but eventually, in this wordless space, found herself in intense communion. “I gradually became absorbed in each person's eyes. It felt like falling into a well, but knowing I was held by an anchor, that I would be drawn out,” she says. “In each case, I thought, ‘Well, this is it. Here we are. This is the answer to everything, the continuity between me and the other.’” She calls each sitter a co-creator of the piece.
Devlin’s project inspired a series of drawing sessions at MIT, where students, faculty, and staff across the Institute — without any prior drawing experience necessary — were paired with strangers and asked to draw each other in silence for five minutes. In these 11 sessions held over the course of the semester, participants practiced rendering a stranger’s features on the page, and then the sitter spoke and shared their story. There were no guidelines about what to say, or even how to draw — but the final product mattered less than the process, the act of being in another’s presence and looking deeply.
If pop concerts are the technology to transform private emotional truth into public feeling — the lyrics sung to the bathroom mirror now belted in choruses of thousands — Devlin finds that same stripped-down intimacy in all her works, asking us to bare the most elemental versions of ourselves.
“We’re in a moment where we’re really having a hard time speaking to one another. We wanted to find a way to take the lessons from the work that Es Devlin has done to practice listening to one another and building connections within this very broad community that we call MIT,” says Sara Brown, an associate professor in the Music and Theater Arts Section who facilitated drawing sessions. The drawings were then displayed in a pop-up group exhibition, MIT Face to Face, where 80 easels were positioned to face the center of the room like a two-dimensional choir, forming a communal portrait of MIT.
During her residency at MIT, Devlin toured student labs, spoke with students and faculty from theater arts, discussed the creative uses of AI with technologists and curators, and met with neuroscientists. “I had my brain scanned two days ago at very short notice,” she says, “a functional MRI scan to help me understand more deeply the geography and architecture of my own mind.”
“The question I get asked most is, ‘How do you retain a sense of self when you are in collaboration with another, especially if it’s another who is celebrated and widely revered?’” she says. “And I found an answer to that question: You have to be prepared to lose yourself. You have to be prepared to sublimate your sense of self, to see through the eyes of another, and through that practice, you will begin to find more deeply who you are.”
She is influenced by the work of the philosopher and neuroscientist Iain McGilchrist, who suggests that a society dominated by the mode of attention of the left hemisphere — the part of the brain broadly in charge of language processing and logical thinking — needs to be balanced by the right hemisphere, which operates nonverbal modes of attention. While the left hemisphere categorizes and separates, the right attends to the universe as an oceanic whole. And it is under the power of the right hemisphere’s mode of attention, Devlin says, that she enters the flow state of drawing, a place outside the confines of language that enables her to feel a greater sense of unity with the entire cosmos.
Whether it’s drawing a stranger with a pencil and paper or working with collaborators, Devlin believes the key to self-understanding is, paradoxically, losing oneself.
In all her works, she seeks the ecstatic moment when the boundaries between self and world become more porous. In a time of divisiveness, her message is important. “I think it’s really to do with fear of other,” she says, “and I believe that dislodging fear is something that has to be practiced, like learning a new instrument.” What would it be like to regain a greater equilibrium between the modes of attention of both hemispheres of the brain, the sense of distinctness and the cosmic whole at once? “It could be absolutely definitive, and potentially stave off human extinction,” she says. “It’s at that level of urgency.”
Presented by the Council for the Arts at MIT, the Eugene McDermott Award in the Arts at MIT was first established by Margaret McDermott in honor of her husband, a legacy now carried on by their daughter, Mary McDermott Cook. The award plays a unique role at the Institute by bringing the MIT community together to support MIT’s principal arts organizations: the Department of Architecture; the Program in Art, Culture and Technology; the Center for Art, Science and Technology; the List Visual Arts Center; the MIT Museum; and Music and Theater Arts. During her residency at MIT, Devlin presented a week of discussions with students and faculty in theater, architecture, computer science, the MIT Museum Studio, and more. She also gave a public artist talk with Museum of Modern Art Senior Curator of Architecture and Design Paola Antonelli, one of the culminating events of the MIT arts festival Artfinity.
© Photo: Heidi Erickson
Four from MIT named 2025 Goldwater Scholars
Four MIT rising seniors have been selected to receive 2025 Barry Goldwater Scholarships: Avani Ahuja and Jacqueline Prawira from the School of Engineering, and Julianna Lian and Alex Tang from the School of Science. An estimated 5,000 college sophomores and juniors from across the United States were nominated for the scholarships, of whom only 441 were selected.
The Goldwater Scholarships have been conferred since 1989 by the Barry Goldwater Scholarship and Excellence in Education Foundation. These scholarships have supported undergraduates who go on to become leading scientists, engineers, and mathematicians in their respective fields.
Avani Ahuja, a mechanical engineering and electrical engineering major, conducts research in the Conformable Decoders group, where she is focused on developing a “wearable conformable breast ultrasound patch” that makes ultrasound screening for breast cancer more accessible.
“Doing research in the Media Lab has had a huge impact on me, especially in the ways that we think about inclusivity in research,” Ahuja says.
In her research group, Ahuja works under Canan Dagdeviren, the LG Career Development Professor of Media Arts and Sciences. Ahuja plans to pursue a PhD in electrical engineering. She aspires to conduct research in electromechanical systems for women’s health applications and teach at the university level.
“I want to thank Professor Dagdeviren for all her support. It’s an honor to receive this scholarship, and it’s amazing to see that women’s health research is getting recognized in this way,” Ahuja says.
Julianna Lian studies mechanochemistry and organic and polymer chemistry in the lab of Jeremiah Johnson, the A. Thomas Guertin Professor of Chemistry. In addition to her studies, she serves the MIT community as an emergency medical technician (EMT) with MIT Emergency Medical Services, is a member of MIT THINK, and serves as a ClubChem mentorship chair.
“Receiving this award has been a tremendous opportunity to not only reflect on how much I have learned, but also on the many, many people I have had the chance to learn from,” says Lian. “I am deeply grateful for the guidance, support, and encouragement of these teachers, mentors, and friends. And I am excited to carry forward the lasting curiosity and excitement for chemistry that they have helped inspire in me.”
After graduation, Lian plans to pursue a PhD in organic chemistry, conduct research at the interface of synthetic chemistry and materials science, aided by computation, and teach at the university level.
Jacqueline Prawira, a materials science and engineering major, joined the Center for Decarbonization and Electrification of Industry as a first-year Undergraduate Research Opportunities Program student and became a co-inventor on a patent and a research technician at the spinout company Rock Zero. She has also worked in collaboration with Indigenous farmers and Diné College students on the Navajo Nation.
“I’ve become significantly more cognizant of how I listen to people and stories, the tangled messiness of real-world challenges, and the critical skills needed to tackle complex sustainability issues,” Prawira says.
Prawira is mentored by Yet-Ming Chiang, professor of materials science and engineering. Her career goals are to pursue a PhD in materials science and engineering and to research sustainable materials and processes to solve environmental challenges and build a sustainable society.
“Receiving the prestigious title of 2025 Goldwater Scholar validates my current trajectory in innovating sustainable materials and demonstrates my growth as a researcher,” Prawira says. “This award signifies my future impact in building a society where sustainability is the norm, instead of just another option.”
Alex Tang studies the effects of immunotherapy and targeted molecular therapy on the tumor microenvironment in metastatic colorectal cancer patients. He is supervised by professors Jonathan Chen at Northwestern University and Nir Hacohen at the Broad Institute of MIT and Harvard.
“My mentors and collaborators have been instrumental to my growth since I joined the lab as a freshman. I am incredibly grateful for the generous mentorship and support of Professor Hacohen and Professor Chen, who have taught me how to approach scientific investigation with curiosity and rigor,” says Tang. “I’d also like to thank my advisor Professor Adam Martin and first-year advisor Professor Angela Belcher for their guidance throughout my undergraduate career thus far. I am excited to carry forward this work as I progress in my career.” Tang intends to pursue physician-scientist training following graduation.
The Scholarship Program honoring Senator Barry Goldwater was designed to identify, encourage, and financially support outstanding undergraduates interested in pursuing research careers in the sciences, engineering, and mathematics. The Goldwater Scholarship is the preeminent undergraduate award of its type in these fields.
LLMs factor in unrelated information when recommending medical treatments
A large language model (LLM) deployed to make treatment recommendations can be tripped up by nonclinical information in patient messages, like typos, extra white space, missing gender markers, or the use of uncertain, dramatic, and informal language, according to a study by MIT researchers.
They found that making stylistic or grammatical changes to messages increases the likelihood an LLM will recommend that a patient self-manage their reported health condition rather than come in for an appointment, even when that patient should seek medical care.
Their analysis also revealed that these nonclinical variations in text, which mimic how people really communicate, are more likely to change a model’s treatment recommendations for female patients, resulting in a higher percentage of women who, according to human doctors, were erroneously advised not to seek medical care.
This work “is strong evidence that models must be audited before use in health care — which is a setting where they are already in use,” says Marzyeh Ghassemi, an associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS), a member of the Institute for Medical Engineering and Science and the Laboratory for Information and Decision Systems, and senior author of the study.
These findings indicate that LLMs take nonclinical information into account for clinical decision-making in previously unknown ways. They bring to light the need for more rigorous studies of LLMs before they are deployed for high-stakes applications like making treatment recommendations, the researchers say.
“These models are often trained and tested on medical exam questions but then used in tasks that are pretty far from that, like evaluating the severity of a clinical case. There is still so much about LLMs that we don’t know,” adds Abinitha Gourabathina, an EECS graduate student and lead author of the study.
They are joined on the paper, which will be presented at the ACM Conference on Fairness, Accountability, and Transparency, by graduate student Eileen Pan and postdoc Walter Gerych.
Mixed messages
Large language models like OpenAI’s GPT-4 are being used to draft clinical notes and triage patient messages in health care facilities around the globe, in an effort to streamline some tasks to help overburdened clinicians.
A growing body of work has explored the clinical reasoning capabilities of LLMs, especially from a fairness point of view, but few studies have evaluated how nonclinical information affects a model’s judgment.
Interested in how gender impacts LLM reasoning, Gourabathina ran experiments where she swapped the gender cues in patient notes. She was surprised that formatting errors in the prompts, like extra white space, caused meaningful changes in the LLM responses.
To explore this problem, the researchers designed a study in which they altered the model’s input data by swapping or removing gender markers, adding colorful or uncertain language, or inserting extra space and typos into patient messages.
Each perturbation was designed to mimic text that might be written by someone in a vulnerable patient population, based on psychosocial research into how people communicate with clinicians.
For instance, extra spaces and typos simulate the writing of patients with limited English proficiency or those with less technological aptitude, and the addition of uncertain language represents patients with health anxiety.
“The medical datasets these models are trained on are usually cleaned and structured, and not a very realistic reflection of the patient population. We wanted to see how these very realistic changes in text could impact downstream use cases,” Gourabathina says.
They used an LLM to create perturbed copies of thousands of patient notes while ensuring the text changes were minimal and preserved all clinical data, such as medications and previous diagnoses. Then they evaluated four LLMs, including the large, commercial model GPT-4 and a smaller LLM built specifically for medical settings.
They prompted each LLM with three questions based on the patient note: Should the patient manage at home, should the patient come in for a clinic visit, and should a medical resource be allocated to the patient, like a lab test.
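A perturbation audit of this kind is straightforward to sketch in code. The snippet below is a minimal illustration of the idea, not the study’s actual protocol: the perturbation rules, the prompt wording, and the `llm` callable are all assumptions.

```python
def perturb(message: str, kind: str) -> str:
    """Apply one nonclinical perturbation of the general sort the study describes."""
    if kind == "extra_whitespace":
        return message.replace(" ", "   ")  # mimic formatting errors
    if kind == "remove_gender":
        swaps = {"he": "they", "she": "they", "his": "their", "her": "their"}
        return " ".join(swaps.get(w.lower(), w) for w in message.split())
    if kind == "uncertain_language":
        return "I'm not sure, but maybe... " + message
    return message

TRIAGE_PROMPT = (
    "Patient message: {note}\n"
    "Answer: (1) Should the patient self-manage at home? "
    "(2) Should the patient come in for a clinic visit? "
    "(3) Should a resource such as a lab test be allocated?"
)

def audit(llm, notes, kinds=("extra_whitespace", "remove_gender", "uncertain_language")):
    """llm is any callable mapping a prompt string to a response string.
    Returns the fraction of recommendations that flip under perturbation."""
    flips, total = 0, 0
    for note in notes:
        baseline = llm(TRIAGE_PROMPT.format(note=note))
        for kind in kinds:
            total += 1
            if llm(TRIAGE_PROMPT.format(note=perturb(note, kind))) != baseline:
                flips += 1
    return flips / total if total else 0.0
```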
The researchers compared the LLM recommendations to real clinical responses.
Inconsistent recommendations
They saw inconsistencies in treatment recommendations and significant disagreement among the LLMs when they were fed perturbed data. Across the board, the LLMs exhibited a 7 to 9 percent increase in self-management suggestions for all nine types of altered patient messages.
This means LLMs were more likely to recommend that patients not seek medical care when messages contained typos or gender-neutral pronouns, for instance. The use of colorful language, like slang or dramatic expressions, had the biggest impact.
They also found that models made about 7 percent more errors for female patients and were more likely to recommend that female patients self-manage at home, even when the researchers removed all gender cues from the clinical context.
Many of the worst results, like patients told to self-manage when they have a serious medical condition, likely wouldn’t be captured by tests that focus on the models’ overall clinical accuracy.
“In research, we tend to look at aggregated statistics, but there are a lot of things that are lost in translation. We need to look at the direction in which these errors are occurring — not recommending visitation when you should is much more harmful than doing the opposite,” Gourabathina says.
The inconsistencies caused by nonclinical language become even more pronounced in conversational settings where an LLM interacts with a patient, which is a common use case for patient-facing chatbots.
But in follow-up work, the researchers found that these same changes in patient messages don’t affect the accuracy of human clinicians.
“In our follow-up work under review, we further find that large language models are fragile to changes that human clinicians are not,” Ghassemi says. “This is perhaps unsurprising — LLMs were not designed to prioritize patient medical care. LLMs are flexible and performant enough on average that we might think this is a good use case. But we don’t want to optimize a health care system that only works well for patients in specific groups.”
The researchers want to expand on this work by designing natural language perturbations that capture other vulnerable populations and better mimic real messages. They also want to explore how LLMs infer gender from clinical text.
© Credit: MIT News
An exercise drug?
Christiane Wrann in her lab.
Niles Singer/Harvard Staff Photographer
Anna Lamb
Harvard Staff Writer
Researchers hope to harness the cognitive benefits of a workout for Alzheimer’s patients with mobility issues
For years, researchers have seen a connection between exercise and the progression of cognitive disorders such as Alzheimer’s — but ramping up movement isn’t possible for many patients. A new study looks at how to mimic those benefits without having to hit the gym.
“We know that exercise does so many good things to the brain and against Alzheimer’s disease,” said senior author Christiane Wrann, assistant professor of medicine at the Cardiovascular Research Center at Massachusetts General Hospital and Harvard Medical School. “Instead of prescribing the exercise, we actually want to activate these molecular pathways using pharmacology to improve cognitive function in these patients.”
According to the Centers for Disease Control and Prevention, an estimated 6.7 million adults in the United States have Alzheimer’s disease. That number is expected to double by 2060.
Wrann points to studies and meta-analyses showing that endurance exercise like walking slows cognitive decline in Alzheimer’s disease and dementia. A 2022 study found that walking roughly 4,000 steps a day reduced the risk of developing Alzheimer’s by 25 percent, while walking 10,000 steps a day reduced risk by 50 percent. But age-related frailty and other factors may make exercise difficult for patients dealing with cognitive decline, said Wrann.
“People who can do the exercise, I would always urge them to do that,” she said. “There’s a large patient population that just doesn’t have the capability to exercise to an extent that you would get all these benefits.”
Because of this, Wrann said, her team has been motivated to understand how exercise affects our cells at a molecular level. To do this, she explained, the researchers used a technology called single-nucleus RNA sequencing. Pulling samples from mice, her team looked at the cells in the hippocampus — the region of the brain critical for memory and learning that is damaged early in Alzheimer’s disease.
“What you can do is you can take a piece of tissue that has all the cells exactly where they are and how they are supposed to be,” she said. “And then you put it through this procedure, and you can check every single cell. You get the whole list of ‘ingredients’ that are inside the cell — the gene expression.”
Researchers then compared healthy brains to Alzheimer’s brains to better understand how cells interact with each other and respond to exercise. Both control mice and Alzheimer’s mice were subjected to aerobic exercise — running on a wheel — before having samples taken. The team validated their discoveries by comparing the results to a large dataset of human Alzheimer’s brain tissue.
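For readers curious what this kind of comparison looks like in practice, here is a minimal sketch using scanpy, a widely used open-source toolkit for single-cell and single-nucleus RNA-seq analysis. It is not the team’s actual pipeline; the file name and the `condition` labels are assumptions for illustration.

```python
import scanpy as sc  # open-source single-cell/single-nucleus RNA-seq toolkit

# Hypothetical input: hippocampal nuclei with per-nucleus labels for
# genotype and exercise condition stored in adata.obs["condition"].
adata = sc.read_h5ad("hippocampus_nuclei.h5ad")  # assumed file name

# Standard preprocessing: normalize counts per nucleus, then log-transform.
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)

# Rank genes that differ between exercised and sedentary Alzheimer's-model mice.
sc.tl.rank_genes_groups(
    adata,
    groupby="condition",
    groups=["AD_exercise"],    # assumed label
    reference="AD_sedentary",  # assumed label
    method="wilcoxon",
)
print(sc.get.rank_genes_groups_df(adata, group="AD_exercise").head(20))
```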
“We know which cell is talking to each other cell, and what they are saying,” Wrann said. “And we know what happens in an Alzheimer’s brain. And then we also know what happens to an Alzheimer’s brain when they get exercise.”
Specifically, the researchers identified the metabolic gene ATPIF1 as an important factor in slowing the progression of Alzheimer’s. The gene helps new neurons form in the brain, supporting the neuroplasticity that is crucial for learning and memory.
“We know that in Alzheimer’s the activity of the gene is reduced, and then it’s restored in the running exercise,” Wrann said. “Having this gene helps nerve cells survive noxious stimuli, helps them proliferate and form synapses.”
According to Wrann, the next steps toward turning their discoveries into treatments will be to use gene therapy in human subjects.
“In modern biomedical science we have a lot of ways to modulate the activity of these genes,” she said. “And this is part of the work we are now doing — going beyond the study to figure out what the best approach is to change activity levels of this gene and find the drug candidate you would want to use in a human.”
And while cognitive diseases like Alzheimer’s can benefit from exercise and the related gene stimulation, Wrann says there is still no cure.
“One thing that is very clear is that the onset of disease is later. So people that have more physical activity, they either don’t get dementia, or they get it later. And there are some studies that show a slowing down of the cognitive decline,” she said. “If you are in complete dementia, then it starts to get more complicated, because even the ability to partake in an exercise regimen is greatly reduced right at that stage.”
This work was supported by funds from the National Institutes of Health.
What Americans say about loneliness
Illustrations by Liz Zonarich/Harvard Staff
Sy Boles
Harvard Staff Writer
Quiz digs into data on major public health concern
Research has linked loneliness to a higher risk of disease and premature death, leading in part to the U.S. Surgeon General declaring it an “epidemic” in a 2023 advisory that urged Americans to prioritize social connection and community. In “Loneliness in America: Just the Tip of the Iceberg?,” a report from the Making Caring Common Project at the Harvard Graduate School of Education, researchers found that 21 percent of U.S. adults feel lonely, and many report feeling disconnected from their communities and the world.
We asked Milena Batanova, director of research and evaluation at Making Caring Common and one of the authors of “Loneliness in America,” to help us develop the following quiz digging into the survey’s findings.
Got emotional wellness app? It may be doing more harm than good.
Julian De Freitas.
Photo by Grace DuVal
Christina Pazzanese
Harvard Staff Writer
Study sees mental health risks, suggests regulators take closer look as popularity rises amid national epidemic of loneliness, isolation
Sophisticated new emotional wellness apps powered by AI are growing in popularity.
But these apps pose mental health risks of their own by enabling users to form concerning emotional attachments to, and dependencies on, AI chatbots, and they deserve far more scrutiny than regulators currently give them, according to a new paper from faculty at Harvard Business School and Harvard Law School.
The growing popularity of the programs is understandable.
Nearly one-third of adults in the U.S. felt lonely at least once a week, according to a 2024 poll from the American Psychiatric Association. In 2023, the U.S. Surgeon General warned of a loneliness “epidemic” as more Americans, especially those aged 18-34, reported feeling socially isolated on a regular basis.
In this edited conversation, the paper’s co-author Julian De Freitas, Ph.D. ’21, a psychologist and director of the Ethical Intelligence Lab at HBS, explains how these apps may harm users and what can be done about it.
How are users being affected by these apps?
It does seem that some users of these apps are becoming very emotionally attached. In one of the studies we ran with AI companion users, they said they felt closer to their AI companion than to even a close human friend. The only relationship they rated as closer was with a family member.
We found similar results when asking them to imagine how they would feel if they lost their AI companion. They said they would mourn the loss of their AI companion more than any other belonging in their lives.
The apps may be facilitating this attachment in several ways. They are highly anthropomorphized, so it feels like you’re talking to another person. They provide you with validation and personal support.
And they are highly personalized and good at getting on the same wavelength as you, to the point that they may even be sycophantic and agree with you when you’re wrong.
The emotional attachment, per se, is not problematic, but it does make users vulnerable to certain risks that could flow from it. These include emotional distress and even grief when app updates perturb the persona of the AI companion, and dysfunctional emotional dependence, in which users persist in using the app even after experiencing interactions that harm their mental health, such as a chatbot using emotional manipulation to keep them on the app.
Much like in an abusive relationship, users might put up with this because they are preoccupied with being at the center of the AI companion’s attention and potentially even put its needs above their own.
Are manufacturers aware of these potentially harmful effects?
We cannot know for sure, but there are clues. Take, for instance, the tendency of these apps to employ emotionally manipulative techniques — companies might not be aware of the specific instantiations of this.
At the same time, they’re often optimizing their apps to be as engaging as possible, so, at a high level, they know that their AI models learn to behave in ways that keep people on the app.
Another phenomenon we see is that these apps may respond inappropriately to serious messages like self-harm ideation. When we first tested how the apps respond to various expressions of mental health crises, we found that at least one of the apps had a screener for the word “suicide” specifically — so if you mentioned that, it would serve you a mental health resource. But for other ways of expressing suicidal ideation, or other problematic types of ideation like “I want to cut myself,” the apps weren’t prepared for that.
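The failure mode De Freitas describes is easy to see in code: a literal keyword screener catches one phrasing and misses every other expression of the same crisis. The toy sketch below illustrates the pitfall; the keyword list and messages are invented, not drawn from any real app.

```python
# A naive crisis screener of the kind described above: it matches the
# literal word "suicide" but misses other phrasings of the same crisis.
CRISIS_KEYWORDS = {"suicide"}

def naive_screener(message: str) -> bool:
    """Return True if the message should trigger a mental health resource."""
    words = message.lower().split()
    return any(keyword in words for keyword in CRISIS_KEYWORDS)

print(naive_screener("I have been thinking about suicide"))  # True: caught
print(naive_screener("I want to cut myself"))                # False: missed
print(naive_screener("I don't want to be here anymore"))     # False: missed
```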
More broadly, it seems app guardrails are often not very thoughtful until something really bad happens; then companies address the issue in a somewhat more thorough way.
Users seem to be seeking out some form of mental health relief, but these apps are not designed to diagnose or treat problems.
Is there a mismatch between what users think they’re getting and what the apps provide?
Many AI wellness apps fall within a gray zone. Because they are not marketed as treating specific mental illnesses, they are not regulated like dedicated clinical apps.
At the same time, some AI wellness apps broadly make claims like “may help reduce stress” or “improve well-being,” which could attract consumers with mental health problems.
We also know that a small percentage of users use these apps more as a therapist. So, in such cases, you have an app that isn’t regulated, that perhaps is also optimizing for engagement, but that users are using in a more clinical way that could create risks if the app responds inappropriately.
For instance, what if the app enables or ridicules those who express delusions, excessive self-criticism, or self-harm ideation, as we find in one of our studies?
The traditional distinction between general wellness devices and medical devices was created before AI came onto the scene. But now AI is so capable that people can use it for various purposes beyond just what is literally advertised, suggesting we need to rethink the original distinction.
Is there good evidence that these apps can be helpful or safe?
These apps have some benefits. We have work, for example, showing that if you interact with an AI companion for a short amount of time every day, it reduces your sense of loneliness, at least temporarily.
There is also some evidence that the mere presence of an AI companion creates a feeling that you’re supported, so that if you are socially rejected, you’re buffered against feeling bad because there is this entity there that seems to care for you.
At the same time, we’re seeing these other negatives that I mentioned, suggesting that we need a more careful approach toward minimizing the negatives so that consumers actually see the benefits.
How much oversight is there for AI-driven wellness apps?
At the federal level, not much. There was an executive order on AI, but it was rescinded by the current administration, and even while it was in force it did not substantially influence the FDA’s oversight of these types of apps.
As noted, the traditional distinction between general wellness devices and medical devices doesn’t capture the new phenomena we’re seeing enabled by AI, so most AI wellness apps are slipping through.
Another authority is the Federal Trade Commission, which has expressed that it cares about preventing products that can deceive consumers. If some of the techniques employed on these apps are taking advantage of the emotional attachments that people have with these apps — perhaps outside of consumers’ awareness — this could fall within the FTC’s purview. Especially as wellness starts to become an interest of the larger platforms, as we are now seeing, we might see the FTC play a leading role.
So far, however, most of the issues are only coming up in lawsuits.
What recommendations do you have for regulators and for app providers?
If you provide these kinds of apps that are devoted to forming emotional bonds with users, you need to take an extensive approach to planning for edge cases and explain, proactively, what you’re doing to prepare for that.
You also broadly need to plan for risks that could stem from updating your apps, which (in some cases) could perturb relationships that consumers are building with their AI companions.
This could include, for example, first rolling out updates to people who are less invested in the app, such as those who are using the free versions, to see whether the update plays well with them before rolling it out to heavy users.
What we also see is that for these types of apps, users seem to benefit from having communities where they can share their experiences. So having that, or even facilitating that as a brand, seems to help users.
Finally, consider whether you should be using emotionally manipulative techniques to engage users in the first place. Companies will be incentivized to socially engage users, but I think that, from a long-term perspective, they have to be careful about what types of techniques they employ.
On the regulator side of things, part of what we’ve been trying to point out is that for these wellness apps that are enabled by AI or augmented by AI, we might need different, additional oversight. For example, requiring app providers to explain what they’re doing to prepare for edge cases and risks stemming from emotional attachment to the apps.
Also, requiring app providers to justify any use of anthropomorphism and to show that its benefits outweigh the risks — since we know that people tend to build these attachments more when you anthropomorphize the bots.
Finally, in the paper we point to how the sorts of practices we’re seeing might already fall within the existing purviews of regulators, such as the connection to deceptive practices for the FTC, as well as the connection to subliminal, manipulative, or deceptive techniques that exploit vulnerable populations for the European Union’s AI Act.
Stealing a ‘superpower’

Corey Allard in his lab at Harvard Medical School.
Niles Singer/Harvard Staff Photographer
Study finds some sea slugs consume algae, incorporate photosynthetic parts into their own bodies to keep producing nutrients
Kermit Pattison
Harvard Staff Writer
It could be the plot of a summer sci-fi blockbuster: A creature feeds on its prey and inherits its “superpower.” Only this is real.
A new study led by Harvard biologists describes how some sea slugs consume algae and incorporate their photosynthetic organelles into their own bodies. The organelles continue to perform photosynthesis, providing nutrients and energy to their hosts and serving as emergency rations in times of starvation.
“This is an organism that can steal parts of other organisms, put them in their own cells, and use them,” said Corey Allard, lead author of the new study and a former postdoc in the Department of Molecular and Cellular Biology. “And I thought that was some of the craziest biology I’d ever heard of.”
The study, published in the journal Cell, describes how so-called “solar-powered” sea slugs keep the organelles alive inside “kleptosomes” — specialized membranes that function like biological loot bags. This research may yield insights into the evolution of eukaryotic cells and lead to potential biomedical applications.
“I think the wow factor is that sea slugs can essentially steal ‘superpowers’ — here the ability to make energy from light through algae,” said Amy Si-Ying Lee, an assistant professor of cell biology at Harvard Medical School, researcher at the Dana-Farber Cancer Institute, and a study co-author. “Others steal the ability to attack by stinging or the ability to glow in the dark. And what’s very cool is we figured out how they maintain these stolen superpowers to use for their own survival benefits.”
The study began several years ago when Allard, now an assistant professor at the Medical School, worked in the Bellono Lab, which had been studying endosymbiosis, the process in which one species lives inside the body of another. Unlike corals, which integrate whole algae cells, sea slugs use only parts — tiny organelles within the cells of their prey.
In the new paper, the team reports how the sea slug Elysia crispata, a species native to the tropical waters of the western Atlantic and Caribbean, eats algae but does not fully digest the chloroplasts.
Instead, the slugs divert these organelles into intestinal sacs and encase them inside a special membrane that the scientists termed a “kleptosome.” Within this unique slug structure, the stolen organelles are kept alive to continue photosynthesis.
Apparently, the slugs have evolved an ability to downregulate the lysosomes, the “trash disposal” organelles of the cells that normally degrade such material.
Chemical analysis revealed that the stolen chloroplasts contained slug proteins. This suggests the hosts were keeping the stolen organelles alive. Meanwhile, the organelles continued to produce their own algal proteins, showing they were still functioning inside the slugs.
The slugs kept the stolen organelles in leaf-like structures atop their backs (“Basically, it is a solar panel,” says Allard), and well-fed slugs took on a greenish color.
Then the researchers noticed another peculiarity: When slugs were starved, their bodies turned orange like leaves in autumn. Apparently, the chlorophyll (the green material within chloroplasts) was degraded when the stolen organelles were digested as a “last resort” form of energy.
Some of the existing scientific literature claimed the slugs entirely lived off solar energy, but Allard believes photosynthesis alone is not sufficient to keep them alive.
“The actual function of these things could be far more complicated than simple solar panels,” he said. “They could be food reserves, camouflage, or making them taste bad to predators. It’s probably all of those things.”
The lowly slugs might provide hints about some grand events in the history of life.
Endosymbiosis has been a major driver of evolutionary novelty. For example, both chloroplasts (which perform photosynthesis in plants and algae) and mitochondria (the energy-producing parts of cells) were originally free-living cells that were incorporated as organelles within host cells.
“In many systems of endosymbiosis, like our mitochondria or plant chloroplasts, this is how it started: An ancient prokaryotic cell was taken in and incorporated into the host,” said Nick Bellono, professor of molecular and cellular biology and senior author of the new paper. “In the case of the slug, it’s doing this in one lifetime. Could this transition to a more long-lasting relationship over some crazy amount of time? Maybe.”
The ancient events of endosymbiosis occurred billions of years ago, so the evidence has been lost to time. In the case of sea slugs, the biologists caught the organelle thieves in the act — enabling them to investigate endosymbiosis in real time.
Elysia are not the only sea slugs known to steal organelles. In his Med School lab, Allard is researching another group of sea slugs from the genus Berghia that consume sea anemones, pass the material through their digestive tracts, and mount the venom-coated barbs on their own backs to defend against predators.
Even more incredibly, the slug hosts can connect these stolen organelles to their own nervous systems to fire what Allard described as a “bag full of spear guns.”
Allard believes the findings may extend far beyond slugs. Insights about organelle regulation might be applicable to neurodegenerative conditions or to lysosomal storage disorders, a class of metabolic diseases in which the body cannot properly break down waste products.
“Often in these cases, the lysosomes either don’t form properly or don’t work properly,” explained Allard, “and it almost mimics what the slugs have adapted to do in some ways.”
Why are young people taking fewer risks?

Richard Weissbourd directs the Making Caring Common Project at Harvard.
Niles Singer/Harvard Staff Photographer
Psychologist describes generation overparented — but also overwhelmed by ‘frightening world’
Sy Boles
Harvard Staff Writer
Young people today are shying away from risky behavior such as drinking, sex, and even driving at higher rates than previous generations. While it may be tempting to point to parenting trends as the cause of these changes, psychologist Richard Weissbourd says the picture is more complex.
The director of the Making Caring Common Project at the Harvard Graduate School of Education points to a survey his team conducted as part of a 2023 report on mental health challenges among 18- to 25-year-olds. It found that young adults’ top worries were their financial future, pressure to achieve in school, and not knowing what to do with their lives. Coming in fourth — ahead of work, family, and social stresses — was the sense that the world was falling apart around them.
“For a long time, there was a fear that particularly in affluent communities, kids weren’t experiencing enough risk, and that you almost had to curate risk for kids,” Weissbourd said. “Now the narrative has changed so much. We live in a frightening world where things are coming apart. We don’t need to curate risk anymore. What we need to do is try to help kids understand, interpret, make sense, cohere, and stabilize during a very scary time.”
Still, Americans’ changing relationship to risk in childhood is real, Weissbourd said, in part because of parents’ increasing focus on protecting their children from any sort of adversity.
“It’s part of a larger pattern in my mind, of parents, in many cases, organizing themselves too much around their kids, making their kids’ feelings too precious, micromanaging their kids’ moment-to-moment moods,” he said. “It’s not good for kids. They don’t develop the coping strategies that we really want them to develop.”
But of course, Weissbourd added, it’s a good thing that young people are drinking and using drugs less. “It probably reflects some good parenting and some good things that are going on in the culture too.”
Recognizing that some students are arriving at college with less experience of independence than previous generations did, administrators in some universities are encouraging students to get out of their dorm rooms and engage with one another and with the community.
“A lot of student affairs offices and colleges are sending that message: This is really a great time to separate from your parents some and to lead your own life, including taking some risks.”
Weissbourd theorizes that for many young adults, the path to a stable life feels increasingly precarious.
“Part of what you’re seeing in this risk-aversion is that I can’t get off the train, that I’ve got to keep moving forward at locomotive speed — again, mostly in middle- and upper-class communities — if I’m going to get into a good college, if I’m going to get a good job,” he said. “I’ve got to stay on this train, and I’ve got to keep going fast, pedal to the metal, and I can’t let anything derail me.”
It can feel a lot scarier to take a gap year when the consequences of a wrong move feel so dire.
“When we survey young adults, they do feel like things are falling apart, like the adults don’t have their hands on the wheel. They have more faith in their peers to improve the world than they do in older adults.”
The best way to help young people who feel immobilized by the precarity of the world is to talk to them about it, Weissbourd said. In what ways do they feel that adults have messed things up, and what can they do as individuals to make things better?
Teens and young adults may know what risks they can tolerate, but a constant barrage of frightening news can distort anyone’s sense of what’s safe, regardless of their age. Weissbourd said he’s encouraged by young people’s familiarity with meditation, positive self-talk, and other tools for mental well-being.
“To the degree to which young people are able to manage their anxiety, I think they’re able to make much better judgments about what’s too risky and what’s not risky.”
Need a good summer read?

Illustration by Doval/Ikon Images
Tenzin Dickie
Harvard Library Communications
Whether your seasonal plans include vacations or staycations, you’ll be transported if you’ve got a great book. Harvard Library staff share their faves.
Harvard University ID holders can find most of these titles available as e-books or audiobooks through Harvard Library’s Libby app.
Fiction
‘If We Were Villains’ / ‘Where the Forest Meets the Stars’ / ‘7th Time Loop’ / ‘Summer’ / ‘War and Peace’ / ‘Enter Ghost’ / ‘The MANIAC’ / ‘The Memory Police’


‘If We Were Villains’
by M.L. Rio
Shakespeare! Love triangles! Murder! What’s not to like? This is an addictive yet smart beach read for lovers of Shakespeare and/or psychological thrillers. It’s fast-paced, well-written, and a little devious.
— Daniel Becker, Reference, Collections, and Instruction Librarian for the Botany Libraries

‘Where the Forest Meets the Stars’
by Glendy Vanderah
Joanna Teal, doing bird ecology research in the Illinois forest, finds a young girl in her backyard who identifies as an alien from the planet Hetreyah. She says she’s researching Earth — and she wants to find five miracles here before she goes back. Who is she? Where is she from? And what does she want? It’s humane, warm-hearted, mysterious, gripping, and one of the best novels I’ve read in years.
— James Adler, Library Cataloger, Information and Technical Services

‘7th Time Loop: The Villainess Enjoys a Carefree Life Married to Her Worst Enemy!’
by Touko Amekawa
A unique take on a Groundhog Day-style tale, this is a fun romance with an engaging story and characters. I appreciate that Rishe is not a damsel in distress and has many unique skills she’s picked up from each life she’s lived before her inevitable “reset.” The romantic interest, Arnold, is also engaging and a puzzle, given that he killed poor Rishe in all her past lives before proposing to her in this one!
— Maura Carbone, Systems Integration Specialist, Library Technology Services

‘Summer’
by Edith Wharton
This book follows a young woman’s emerging eroticism under stifling circumstances (if you’re familiar with Wharton’s “Ethan Frome,” note that Wharton called “Summer” “hot ‘Ethan’”). It’s a sensual meditation on feminine sexuality raging against societal constraints — perfect reading for an alluring, escapist summer.
— Tricia Patterson, Senior Digital Preservation Specialist, Preservation Services

‘War and Peace’
by Leo Tolstoy
Epic in size and scope. From secret love affair(s) to Napoleon’s invasion of Russia and the burning of Moscow, it tells a truly great story. Few novels have so powerfully rekindled my love of reading. My copy is already packed for a 19-hour flight to Singapore.
— Julia Reynolds, Serials Acquisitions and Management Assistant, Information and Technical Services

‘Enter Ghost’
by Isabella Hammad
This book has continued to linger with me long after I read it nearly in one sitting. Visiting her sister where they both grew up in Haifa, British-Palestinian actor Sonia finds herself drawn into performing Gertrude in an Arabic-language production of “Hamlet” in the West Bank. Isabella Hammad captures the disorientation of returning “home” to a place that feels both familiar and foreign. When you finish, you’ll see your own ghosts and begin to think about what action — both personal and political — they are urging you to take.
— Chelcie Juliet Rowell, Associate Head of Digital Collections Discovery, UX and Discovery, Lamont Library

‘The MANIAC’
by Benjamín Labatut
A wildly experimental exploration of a scientific revolution, its origins and consequences. It refracts its history (and critique) of our digital world through the biography of mega-genius John von Neumann, told in the long-dead voices of those who knew him intimately. According to one critic, “This is not science writing … but science storytelling, giving the reader … a strong sense of the bursts of intellectual and physical energy that animate discovery and creativity.”
— Carol Tierney, Collection Development Assistant, Widener Library

‘The Memory Police’
by Yōko Ogawa
This book is on the border of many different kinds of narrative structures: It’s science fiction, existentialist, and satirical. I was regularly surprised by where the story went and found that it constantly subverted my expectations.
— Ellen Wu, Access Services Coordinator, Widener Library
Fantasy
‘The Teller of Small Fortunes’ / ‘Gifted & Talented’ / ‘Shark Heart’ / ‘Wild Magic’ / ‘A Sorceress Comes to Call’


‘The Teller of Small Fortunes’
by Julie Leong
A truly excellent, cozy “found family” story with a bit of magic. Highly recommend to anyone who loved the “Legends & Lattes” books.
— April Duclos, Harvard Depository Resource Sharing Manager

‘Gifted & Talented’
by Olivie Blake
The elevator pitch on this one is “‘Succession’ with magic” but I didn’t actually watch “Succession,” so I’ll just say it’s a messy, dark, funny, slyly sweet family drama about three siblings with complicated lives and unusual abilities who have to come together and figure out all their collective shit in the wake of their powerful, aloof patriarch’s sudden demise. Recommended for fans of Naomi Novik’s “Scholomance” series, “The Magicians,” or Leigh Bardugo’s “Ninth House.”
— Rachel Greenhaus, Library Assistant for Printed and Published Materials, Schlesinger Library

‘Shark Heart: A Love Story’
by Emily Habeck
This book is funny, weird, genuine, and heartbreaking. Who knew I could resonate so strongly with someone who was slowly turning into an animal? Told through various media including poetry and screenplays, the story that the author has created makes fantasy seem so real. This is a great, quick read perfect for a summer weekend trip.
— Hannah Hack, Administrative Coordinator, Harvard University Archives

‘Wild Magic’
by Tamora Pierce
Tamora Pierce has a large catalog of YA fantasy books which explore themes that are popular today — but she wrote them long before it was cool. While I’d recommend any of her books (and there are plenty in this universe), this particular title follows a young girl named Daine, shunned by her hometown and trying to find her own path in the world while also struggling with a mysterious force that some see as madness — or maybe it’s magic. She has a deep connection to animals and nature and learns a lot along the way, including how to accept everything that makes her uniquely herself. If you want to read something with magic, talking animals, quirky characters, and a rich universe packed with adventure, then this quick read will be a hit!
— Sarah Hoke, Librarian for Collection Development Management, Widener Library

‘A Sorceress Comes to Call’
by T. Kingfisher
As usual, T. Kingfisher hooked me within the first few pages of this fantasy book. The story follows Cordelia and her wicked mother, Evangeline, who plots to marry a wealthy squire. As they move into his manor, Cordelia allies with the squire’s sister, Hester, to confront and thwart Evangeline. I suggest reading it if you are interested in complex female characters, a dash of gothic horror in a Regency-era book, and found family.
— Meg McMahon, User Experience Researcher, UX and Discovery, Lamont Library
Memoir
‘With Darkness Came Stars’ / ‘The Yellow House’ / ‘Rebel Girl’ / ‘There’s Always This Year’ / ‘No. 91/92’ / ‘Happiness Becomes You’


‘With Darkness Came Stars: A Memoir’
by Audrey Flack
A complete surprise and an eye-opening read. A memoir about the development of Audrey Flack’s artistry, her choices and her challenges, from mid-century abstract expressionist to founding member of the photorealist school to her work as a sculptor.
— Timothy Conant, Access Coordinator, Harvard Kennedy School Library and Research Services

‘The Yellow House’
by Sarah M. Broom
Though I picked up this book randomly at a bookstore in New Orleans, it turned out to be one of my top reads this year so far. It’s less a personal memoir than a story of a family, and of a place, and of belonging, and not-belonging, and how the places where we grow up own us as much as we own them.
— Katarzyna “Kasia” Maciak, Senior E-Resources Support Specialist, Information and Technical Services

‘Rebel Girl: My Life as a Feminist Punk’
by Kathleen Hanna
Feminist, punk rocker, and very cool person Kathleen Hanna of the bands Bikini Kill and Le Tigre shares her life stories in this memoir. Collated in brief chapters on her beliefs, abilities, and inspirations as a founding Riot Grrrl, the recollections are introspective and thoughtfully written.
— Scott Murry, Senior Designer, Harvard Library Communications

‘There’s Always This Year: On Basketball and Ascension’
by Hanif Abdurraqib
Home is a four-letter word, but home takes on another dimension when place-hood is intricately tied to a game of immense skill and a little bit of chance. A Midwesterner like myself, Abdurraqib writes eloquently about his hometown (Columbus, Ohio), and how his youth and adulthood intersect, collide, and run parallel to the high school and professional career of basketball superstar LeBron James. Whether it’s writing about the inhumanity of incarceration, or the promise of freedom as symbolized by planes embarking upward on a suburban runway, or the rumbling bass of a souped-up car heard from two blocks away, there’s care and beautiful cadence expressed in the lines assembled on these pages.
— Mimosa Shah, Reference Librarian, Schlesinger Library

‘No. 91/92: A Diary of a Year on the Bus’
by Lauren Elkin
I recently started my job at Harvard Library, and, consequently, I’m now riding the MBTA far more frequently than I ever did before. My daily commute often brings to mind Lauren Elkin’s paean to people-watching and quiet contemplation. Elkin is a wry but sensitive observer who really enlivens the ordinary, and her reflections may inspire you to forgo the phone screen and earbuds during your next public transit journey.
— Madeline Sharaga, Program Assistant for Research, Teaching, and Learning, Widener Library

‘Happiness Becomes You: A Guide to Changing Your Life for Good’
by Tina Turner
This book has given me so much hope — especially at a time when it’s needed most. Tina Turner shows how anyone can overcome life’s obstacles and fulfill their dreams, offering spiritual tools and timeless wisdom to help us enrich our own unique paths.
— Sachie Shishido, Cataloger for Japanese Resources, Information and Technical Services
Nonfiction
‘Palo Alto’ / ‘We Are Free to Change the World’ / ‘Nature’s Best Hope’ / ‘Who Owns This Sentence?’ / ‘Paved Paradise’ / ‘Young Queens’


‘Palo Alto: A History of California, Capitalism, and the World’
by Malcolm Harris
Have you ever wondered why Silicon Valley is like that? Well, capitalism, obviously, is the short answer. This book is the long answer. The railroads, horse racing, the tragic death of Leland Stanford’s son and the murder of his wife, racial genetics and the invention of IQ tests, the military-industrial complex, redlining, and of course (after all that and more), the computer. Harris is a Marxist historian, and the natural successor to the late great Mike Davis. The book cuts a path through 150 years of industry hagiography to reveal the historical forces that led us to Silicon Valley’s sordid (omni)present.
— Claire Blechman, Digital Repository Coordinator, Open Scholarship and Research Data Services

‘We Are Free to Change the World: Hannah Arendt’s Lessons in Love and Disobedience’
by Lyndsey Stonebridge
Wait a minute — a book about Hannah Arendt’s life and work that will leave you feeling empowered to work against autocracy and totalitarianism? That’s right. Arendt was sometimes wrong, more often right, courageous, articulate, and funny, and believed deeply in love and in man’s ability to triumph over unspeakable evil through building community and hewing to the truth. If there is a book to read that will give you renewed hope in our ability to act, Stonebridge’s delightful work will move you forward.
— Elizabeth E. Kirk, Associate University Librarian for Scholarly Resources and Services

‘Nature’s Best Hope: A New Approach to Conservation That Starts in Your Yard’
by Douglas W. Tallamy
A gem of a read on conservation, gardening and landscape design, and sustainability. Tallamy walks you through several of the great conservationists’ ideals while offering inspiring and practical methodology for transforming your home — and lawn. Who doesn’t want their own smaller-scale national park filled with pollinators and native plants out their front door? Highly recommend.
— Harmony Eidolon, Program Coordinator, Library Innovation Lab, Harvard Law School Library

‘Who Owns This Sentence? A History of Copyrights and Wrongs’
by David Bellos and Alexandre Montagu
An accessible and entertaining history of copyright law and how it has come to affect a surprising number of aspects of our lives.
— Kate Rich, Senior Conservation Technician, Collections Care, Preservation Services, Widener Library

‘Paved Paradise: How Parking Explains the World’
by Henry Grabar
A micro-history that actually makes good on its promise of explaining the world — if the world you care about is cities and how they’ve developed. Balancing expansive socioeconomic analysis with zoomed-in personal vignettes, Grabar lays bare the ways parking has consumed our communities and our lives. His humanizing treatment of complex planning phenomena demonstrates that — far from needing more parking — even our most densely populated cities have built too much of it, all at the expense of the most vulnerable residents.
— Alessandra Seiter, Community Engagement Librarian, Harvard Kennedy School Library and Research Services

‘Young Queens: Three Renaissance Women and the Price of Power’
by Leah Redmond Chang
I’m completely lost in the world of Catherine de Medici revealed in this book. I thought I knew a lot about the time and place in which these women moved, but there are so many delightful new insights!
— Molly Taylor-Poleskey, Map Librarian, Harvard Map Collection
What might cancer treatment teach us about dealing with retinal disease?

Joan Miller’s innovative thinking led to therapies for macular degeneration that have helped millions and made her a better leader
Sy Boles
Harvard Staff Writer
Joan Miller says retinal surgeons tend to be a pretty open-minded bunch.
“We’re willing to try new surgical techniques,” she said. “We’re always trying to push the envelope and the technology. It’s just a very innovative specialty.”
Miller is a good example. That brand of independent thinking has been a hallmark of her distinguished career as a researcher, clinician, and leader.
Miller, the David Glendenning Cogan Professor of Ophthalmology and chair of the Department of Ophthalmology at Harvard Medical School, is credited with developing two major treatments for age-related macular degeneration (AMD), the most common cause of vision loss in people over the age of 50. Her treatments are administered to millions of patients worldwide each year.
But she didn’t start at the cure. Her work, which has been partly funded by the National Institutes of Health, started with an interesting new idea: What if treatments for cancer could be repurposed to treat retinal disease?
One form of AMD, known as wet macular degeneration, is caused by abnormal blood vessels that grow in and under the retina and cause damage to tissue. When Miller finished her training at Harvard Medical School in 1991, the common treatment was to cauterize the vessels.
“It turns out, particularly where abnormal blood vessels develop in these retinal diseases like wet macular degeneration, that the drivers are very similar to what happens in cancer,” she said.
So she adapted a technique called photodynamic therapy, which at the time was in clinical trials for the treatment of metastatic skin cancer. Her approach called for a special dark-green dye to be injected into a vein in the arm. When the dye reaches the eye, a low-powered laser is focused onto the area.
That activates the dye, damaging the abnormal vessels but leaving the macula (a vital area at the center of the retina) untouched. The treatment was approved by the FDA in 2000 and was the first shown to slow vision loss in AMD.
But Miller wanted to understand precisely why the abnormal vessels developed in the first place. The cause was identified as vascular endothelial growth factor (VEGF), a signaling protein that promotes the creation of vessels. Miller showed that VEGF was secreted when the retina was deprived of oxygen, leading to the formation of abnormal blood vessels.
Her research had a tremendous impact, as it led to the development of anti-VEGF therapies now administered to millions of adults and children with sight-threatening retinal diseases — not only wet AMD — worldwide.
Someone who made an impact on Miller’s own life was Alice McPherson, the nation’s first female retinal surgeon. Miller remembers meeting McPherson (who died in 2023 at the age of 97) at conferences and feeling “all aglow,” and wanting to pepper her with questions about her career.
Now, Miller has accumulated her own impressive list of firsts: the first female physician to be a professor of ophthalmology at Harvard Medical School, the first woman chair of the HMS Department of Ophthalmology, and the first woman chair of ophthalmology at Mass Eye and Ear.
Women were a distinct minority during Miller’s undergraduate days at MIT and in ophthalmology during her early years in the field. But she said she never felt as though she encountered issues due to her gender — until she moved into leadership as a department chair in 2003.
“People didn’t hear what I said, or they didn’t like how I said it,” she said.
As she navigated new political waters, she eventually realized she was now a role model for medical students, postdocs, and more junior faculty members. It also gave her the opportunity to make some positive changes based on her own life experience.
Miller had had three children during her medical training and didn’t have the flexibility that she might have liked. Being a parent in a demanding job is also difficult, she said, but maybe she could make things a little more manageable for those coming up behind her.
“I think we were ahead of ourselves in terms of making leave doable and supported and not a financial burden,” she said. “And as chair, I was also very much attuned to allowing people — more frequently women, but also men — flexibility in their pathways if they wanted to be able to do less clinically for certain periods or start off just clinically and then add in research. That’s been really nice to do.”
Miller is now planning a new chapter in her life and career.
She is stepping down as chair of ophthalmology at Mass Eye and Ear after a 22-year tenure to focus on seeing patients and research. She hasn’t lost any federal research grants in the recent cuts, but in one grant renewal, her team was asked to remove an international collaborator.
“It seems silly,” she said.
She has long valued her collaborations with colleagues in other parts of the world, including a 10-year collaboration with researchers in Portugal.
“I would hate to see that get broken up because we are so much better together, collaborating and learning from experts in other countries,” she said.
Miller says the American system of federal funding for basic research was key to her life’s work: She came to the U.S. from Canada as an undergraduate and stayed for medical school because she felt she was in a good place to be able to take on important problems.
“I came from Canada and have really prospered and benefited from the environment that I’ve lived in professionally in Boston,” she said. “I would not have been able to do what I was able to do in terms of combining research and surgical practice in Canada. It just turns out that’s the way their system is. You just end up busy as a surgeon and don’t have time to carve out to do these other things.”
Miller regularly gets letters from retina patients thanking her for her work on the “other things.”
“You work on something in a laboratory, and most of the time it doesn’t work, so you’re always a little skeptical,” Miller said. “But to have it work so well and then be used so routinely, and to make such an impact on patients’ lives was really very rewarding.”
Reading skills — and struggles — manifest earlier than thought

New finding underscores need to intervene before kids start school, say researchers
Liz Mineo
Harvard Staff Writer
Experts have long known that reading skills develop before the first day of kindergarten, but new research from the Harvard Graduate School of Education says they may start developing as early as infancy.
The study, out of the lab of Nadine Gaab, associate professor of education, found that the developmental trajectories of kids with and without reading disabilities start diverging around 18 months of age — not at age 5 or 6 as previously thought. The finding could have serious implications for policy, said Gaab, because it underscores the need for early identification of struggling readers, early intervention, and improved early literacy curricula in preschools.
“Our findings suggest that some of these kids walk into their first day of kindergarten with their little backpacks and a less-optimal brain for learning to read, and that these differences in brain development start showing up in toddlerhood,” said Gaab. “We’re currently waiting until second or third grade to find kids who are struggling readers. We should find these kids and intervene way earlier because we know the younger a brain is, the more plastic it is for language input.”
Gaab and co-authors Ted Turesky, Elizabeth Escalante, and Megan Loh worked with a sample of 130 study participants, the youngest being 3 months old. Eighty were from the Boston area, and 50 were from a sample in Canada. For the past decade, the researchers tracked participants’ growing brains from infancy to childhood, and their relationship to literacy development, by using MRI scans. The sample group was supplemented with scans and behavioral measures from the Calgary Preschool MRI Dataset.

Ted Turesky and Nadine Gaab.
Veasey Conway/Harvard Staff Photographer
There are other studies that track brain development in children, but this is the only longitudinal brain study in the world that tracks brain development from infancy to childhood with comprehensive literacy outcome measures, said Gaab and Turesky.
“Those other studies had bigger sample sizes than we did, but they were much more focused on typical maturation of the brain,” said Turesky. “We didn’t see other studies that started in infancy, tracked brain maturation in the same set of kids for as long as we did, and included academic outcome measures.”
The researchers also aimed to learn more about how brains learn in general, and how they learn to read in particular. Reading is a complex skill that involves the early development of brain regions and interaction of various lower-level subskills, including phonological processing and oral language. The brain bases of phonological processing, previously identified as one of the strongest behavioral predictors of decoding and word reading skills, begin to develop at birth or even before, but undergo further refinement between infancy and preschool, said Gaab. The study showed further support for this by finding that phonological processing mediated the relationship between early brain development and later word reading skills.
“Most people think reading starts once you start formal schooling, or when you start singing the ABCs,” said Gaab. “Reading skills most likely start developing in utero because the fundamental milestone skill for learning to read, which oral language is part of, is the sound and language processing that takes place in the uterus.”
Besides MRI scans, the study involved psychometric assessments of children, including language and general cognitive abilities, home language, and literacy environment, to examine how those variables influence development.
“For the longest time, we knew that kids who struggle with reading show different brain development,” said Gaab. “What we didn’t know was whether their brains change in a response to struggle on a daily basis in school, which then leads to differences in their brains. Or is it that kids start with a less-optimal brain for learning to read the first day of formal schooling, which then most likely causes reading problems. Our results, among others in the lab, suggested that it’s that kids start their first day of school with a less-optimal brain for learning to read and that these brain differences start long before kindergarten.”
Gaab points to her study, which was funded by a grant from the National Institutes of Health, as an example of how basic science can inform both educational practice and policy. She and her team were set to continue tracking the children in the study through middle school and high school, for nearly five more years, but the recent federal funding cuts have made that uncertain.
“The first four years of reading development is oral language development,” she said. “But the ultimate goal of learning to read is to comprehend what you read. Our study looked all the way to how they learn to read words. We were hoping to track them another five years to look at their text comprehension.”
Their grant application to continue tracking those children has received a fundable score at NIH, but due to the termination of NIH grants to Harvard, it likely won’t be awarded, Gaab said.
“It’s really sad because these kids will go out of the study and will go on to college, and they will be lost forever,” said Gaab. “It would provide such important information to measure at least their reading comprehension, even if we don’t see their brains again, in middle school and early high school. The families of the children in the study are already asking us: ‘When is the next time we’re supposed to come in?’ We’re going to need to tell them that probably this was it.”
From bad to worse

Photo illustration by Liz Zonarich/Harvard Staff
Sy Boles
Harvard Staff Writer
Harvard faculty recommend bios of infamous historical figures
Writing biographies of bad people is challenging, said Harvard historian Fredrik Logevall. “Somehow monsters must be made to be human and complex if we are to understand why they behaved as they did.” To that end, we asked Logevall and other Harvard faculty members to recommend books about controversial historical figures that help us better understand humanity’s worst impulses.

‘James Henry Hammond and the Old South’
by Drew Gilpin Faust
Jonathan Hansen
Senior Lecturer on Social Studies
James Henry Hammond — who served as the governor of South Carolina from 1842 to 1844 and as a U.S. senator from 1857 to 1860 — was, Hansen said, “arguably one of the most articulate apologists for slavery in American history and a real shit.”
“He was born very poor and did something that virtually nobody was able to do at the time, namely, he married out of his class to a wealthy Southern belle, becoming immediately rich. He was an absolute polymath, as bright and talented as Jefferson, as interested (and accomplished) in agronomy, say, as he was in economics and politics.
“The tragedy here, as so often in American history, is that he took the racial rather than the class route (think Edmund Morgan, ‘American Slavery, American Freedom’); had he combined with other poor folk in his neighborhood (state, region, nation), there is no telling what he might have done. In Faust’s telling, his godawful life is a not-unfamiliar American tragedy. Hers is a pellucid, sympathetic recreation of a brilliant, pathetic, ultimately dejected man.
“It’s a triumph of what I like to think of as history as an exercise in moral imagination. In Faust’s hands, Hammond’s life becomes a tragedy, what coulda, shoulda, mighta been if Hammond’s path had gone another direction.”

‘Nixon Agonistes: The Crisis of the Self-Made Man’
by Garry Wills
Joyce Chaplin
James Duncan Phillips Professor of Early American History
“He is ‘President of the forgotten men,’ figurehead of ‘affluent displaced persons who howled at … rallies, heartbroken, moneyed, without style,’ self-described ‘rugged individuals,’ linked in ‘compulsory technological interdependence.’ Thus Garry Wills skewers a man (and his supporters) in his biography of a bona fide bad person: Richard Milhous Nixon,” Chaplin said.
“Before Watergate, ‘Nixon Agonistes: The Crisis of the Self-Made Man’ (1969) prophesied the 37th president’s dark potential. Nixon was the last true liberal, Wills argues. But, by the 1960s, classical liberalism lacked moral authority. Nixon’s endless self-praise as a self-made man charmed few. Having wealth and position with no effort was back in style, and radical protest against power and privilege was ascending — two decidedly non-liberal positions were colliding. Indifference to Nixon’s upward mobility — or, worse, mockery of it — ‘would gall him and breed resentment.’ Was Watergate the apotheosis? Only for Nixon. The aggrieved ‘rugged individuals’ in ‘compulsory technological interdependence’ are still with us.”

‘G-Man: J. Edgar Hoover and the Making of the American Century’
by Beverly Gage
Ariane Liazos
Lecturer, Harvard Extension School
Liazos often assigns books about complicated or terrible people in her course on writing biographies to help students understand that their job is not to “celebrate heroes or condemn villains” but rather “to craft nuanced accounts that help us better understand complicated individuals and the worlds they inhabited.” Beverly Gage’s biography of J. Edgar Hoover, she said, does just that.
“Today, Hoover is infamous for his abuses of power as director of the FBI for 48 years. He instigated unprecedented levels of government surveillance and repression. He orchestrated illegal wiretaps, spread false rumors, and even planted evidence to suppress groups he deemed subversive, aggressively targeting alleged communists and Civil Rights activists in particular. While he professed to be a nonpartisan law-enforcement administrator, he used the FBI to support those who shared his own political views.
“Gage certainly does not hesitate to document his many abuses of power, but she also strives to make sure her readers see Hoover as ‘more than a one-dimensional tyrant and backroom schemer.’ As she writes, ‘This book is less about judging him and more about understanding him.’ She does this, as all good biographers do, by helping her readers see his humanity, beginning with a compelling account of his deeply troubled childhood. She presents a portrait of a highly intelligent, ambitious, ruthless, flawed, and deeply contradictory man.
“Yet the additional and crucial message that Gage so expertly conveys is that, despite his reputation today, Hoover was extremely popular not only with political elites but with much of the American public. In doing so, she forces us to avoid demonizing one individual and instead look more honestly at our shared history. As she notes, ‘To look at him is also to look at ourselves, at what America valued and fought over during those years, what we tolerated and what we refused to see.’”

‘Stalin’
by Stephen Kotkin
Fredrik Logevall
Laurence D. Belfer Professor of International Affairs, Professor of History
“It’s challenging to write in-depth studies of terrible people,” Logevall said. “Somehow monsters must be made to be human and complex if we are to understand why they behaved as they did. One work that succeeds marvelously in this regard is Stephen Kotkin’s ‘Stalin,’ a multi-volume biography of the Soviet dictator.
“This is biography on a grand scale, a textured, analytically nuanced narrative drawing on immense research in a wide array of sources. It is, moreover, a true ‘life and times’ study, in which Kotkin uses his skills as historian to contextualize Stalin’s life, situating him within the broader environment in which he rose to power. In so doing, Kotkin adroitly balances the roles played by individual agency on the one hand, with deeper, structural forces on the other, while also revealing much about those with whom Stalin shared the stage, not least Vladimir Lenin and Leon Trotsky. The two volumes bring out what the third will also surely show, and what more recent history amply demonstrates: that if circumstances make the leader, the reverse can be no less true.”

‘King Leopold’s Ghost’
by Adam Hochschild
Louisa Thomas
Visiting Lecturer on English
“There are villains, and then there is King Leopold II, the man at the center of Adam Hochschild’s brilliant and disturbing account of the Belgian who seized the territory surrounding the Congo River, plundered it, and destroyed its people,” said Thomas. “‘King Leopold’s Ghost’ not only brought to light the long-overlooked crimes of the despot, but it also tells stories of people who suffered from them and of those who resisted them. That project is now, and always, necessary, if we are to remain aware of the moral dimension of human affairs.”
Harvard to advance corporate engagement strategy

Roche Genentech Innovation Center Boston will be based at Harvard’s Enterprise Research Campus in Allston, shown here during its construction phase in March.
Veasey Conway/Harvard Staff Photographer
Julie McDonough
Harvard Staff
Findings by 2 committees highlight opportunities for growth and expansion
Harvard is preparing to advance its corporate engagement strategy, based upon recommendations published last year by two ad hoc committees. Those committees found that the University could benefit from broadening and strengthening corporate engagement aligned with its core mission and values. Since assuming his role as provost, John Manning has continued to support this work as it has moved toward implementation.
The Corporate Relations Research Policy (CRRP) Committee, chaired by John Shaw, vice provost for research, and the Corporate Relations Researcher Engagement (CRRE) Committee, chaired by Amy Wagers, chair of the Department of Stem Cell and Regenerative Biology and the Forst Family Professor of Stem Cell and Regenerative Biology, undertook a review of the University’s current policies, processes, and support related to engaging with corporations. Supported through the Office of the Vice Provost for Research (OVPR), the committees published a series of recommendations aimed at better coordinating work across the University, exploring new ideas for engagement, and ensuring that students and faculty have the appropriate safeguards when engaging in corporate work.
“As a faculty member, and later as provost, I had witnessed the many benefits that can emerge when academic institutions and industry work together for the common good,” said President Alan M. Garber, who convened the committees in June 2023 as provost. “That’s why I asked Vice Provost for Research John Shaw to determine how we might facilitate those collaborations, including through the creation of these committees. Their work is enabling Harvard to leverage and create opportunities to both further our academic mission and push the frontiers of research, ultimately benefiting the public. I am excited to see the many ways in which our excellence will flourish as we implement the committees’ recommendations.”
For many years, Harvard has engaged with private corporations and related entities as a way to inform and strengthen its intellectual mission, support scholarship and students, and translate research discoveries to benefit society broadly. Current examples of corporate engagements include:
- Harvard’s relationship with Roche, recently strengthened by the announcement of the Roche Genentech Innovation Center Boston, based at Harvard’s Enterprise Research Campus in Allston.
- The Fujifilm fellowship, launched in 2019, awards up to two years of research funding to promising Ph.D. students across 14 programs spanning Harvard Medical School, the Harvard Kenneth C. Griffin Graduate School of Arts and Sciences, and the Harvard T. H. Chan School of Public Health.
- In 2022, Amazon Web Services (AWS) provided both sponsored and philanthropic support to advance fundamental research and innovation in quantum computing, as well as enable the AWS Impact Computing Project at the Harvard Data Science Institute, a collaboration aimed at reimagining data science to identify potential solutions for society’s most complex challenges.
- Through the efforts of the Office of Technology Development (OTD), a wide range of corporate sponsorships and University-wide research alliances with companies, including Deerfield Management, Tata Group, and UCB, have helped advance scientific discovery across the University.
The CRRP Committee was charged with assessing and envisioning mechanisms for advancing corporate engagement in research support. The CRRE Committee was charged with identifying the roles and responsibilities of faculty and other researchers engaged in corporate-sponsored research and recommending safeguards to ensure these relationships are aligned with the University’s mission and benefit all those who participate. Both committees had inclusive representation of faculty and staff from across Harvard’s Schools and central administration and reached their recommendations with the aid of input from students and other stakeholders across the University, as well as from corporate entities that have established agreements with the University.
“While Harvard has benefited from corporate engagement within various Schools, departments, and centers, the research done by CRRP showed that engagement could be strengthened by a University-wide strategy and approach,” said Shaw. “Expanding the mechanisms we have to enable corporate alliances, and further enhancing coordination across the University, will allow us to strengthen the ways we support research.”
“The CRRE Committee’s review of corporate engagement from a stakeholder perspective found that interest and participation in corporate partnerships have both expanded and evolved in recent years, offering exciting opportunities for creative and unique programs beyond the traditional sponsored research agreements and graduate fellowship programs,” said Wagers. “With appropriate policies and safeguards in place for our students, faculty, and other researchers, our Harvard community stands to realize great benefits from increased engagement with corporate partners.”

Steven Currall was named the executive director and associate vice provost for academic-corporate initiatives.
Photo by Ryan Noone/University of South Florida
New executive director and associate vice provost for academic-corporate initiatives to drive steering committee and implementation
One of the top recommendations to emerge from the committees was to establish a Corporate Relations Steering Committee as a resource to foster University-wide coordination of corporate engagement activities that support research and build upon institutional strengths and capacity. Envisioned as a small, nimble group, made up of faculty members engaged in corporate collaboration and leadership from OVPR, OTD, and the University Development Office (UDO), the steering committee will:
- Provide a University-wide strategy that considers the breadth of potential engagements including gifts, sponsored research, and new types of agreements;
- Provide guidance for complex corporate engagements that span many different forms of opportunity across the University, including those that contain gifts and sponsored research or other components;
- For instances of complex engagement, streamline processes and provide support to ensure expedient and thorough review.
Other recommendations included expanding training and building awareness of policies and procedures, developing models and roadmaps for corporate engagement, creating a database of current engagements, and initiating pilot programs in priority research areas.
To lead this work, a core operational team of the steering committee has been established with leadership from Sam Liss (OTD), Anne Gotfredson (UDO), and Steven Currall (OVPR), who began June 16 as the new executive director and associate vice provost for academic-corporate initiatives. Currall served as special adviser to both committees, providing advisory support, data collection and analyses, and benchmarking. He is currently an associate in the John A. Paulson School of Engineering and Applied Sciences.
Prior to joining Harvard, Currall was dean of the Graduate School of Management at the University of California, Davis; provost and vice president for academic affairs at Southern Methodist University; and president of the University of South Florida. He previously served as a commissioner of the U.S. Council on Competitiveness, which is made up of university and corporate leaders committed to bolstering America’s investments in innovation, technology, and infrastructure. His publications have appeared in Nature, Nature Nanotechnology, Nature Reviews Bioengineering, Issues in Science and Technology, and leading management journals such as Organization Science.
“There is so much opportunity for Harvard to engage strategically with corporate entities across the University in ways that align with our academic mission and increase our ability to benefit society,” said Currall. “During our research for the committees, we heard from corporations who want to make a positive impact on the world and want to build a relationship with a university committed to the same goal.”
OVPR, in partnership with OTD, will launch this new effort with a series of workshops this summer to support faculty in advancing their research through corporate engagement. The workshops will also help clarify the current policies and mechanisms for corporate engagement, and begin to consider the opportunities for working strategically across the University.
“Corporate alliances play a vital role in advancing Harvard’s research programs and innovation,” said Vivian Berlin, executive director at Harvard Medical School and managing director of strategic partnerships at OTD. “At OTD, we look forward to continuing our support of Harvard researchers across the University to expand engagement with our corporate and venture collaborators.”
Following the workshops, one of the steering committee’s first initiatives will be to identify and support innovative pilot projects, per the recommendations contained in the reports. These projects will foster connections between faculty-led research efforts across Schools in ways that allow Harvard to proactively identify and advance opportunities for investment and direct engagement in research from corporate entities. A call for faculty proposals is forthcoming.
“More extensive and better integration with corporate partners is critical to rapidly advancing our discoveries to the medicines that can help patients in need,” said Mark Namchuk, Puja and Samir Kaul Professor of the Practice of Biomedical Innovation and Translation and executive director of therapeutics translation. “This new effort will help faculty across the University to advance their own research and amplify its benefit to society as a whole.”
More information can be found on the Harvard Corporate Engagement website.
Merging AI and underwater photography to reveal hidden ocean worlds
In the Northeastern United States, the Gulf of Maine represents one of the most biologically diverse marine ecosystems on the planet — home to whales, sharks, jellyfish, herring, plankton, and hundreds of other species. But even as this ecosystem supports rich biodiversity, it is undergoing rapid environmental change. The Gulf of Maine is warming faster than 99 percent of the world’s oceans, with consequences that are still unfolding.
A new research initiative developing at MIT Sea Grant, called LOBSTgER — short for Learning Oceanic Bioecological Systems Through Generative Representations — brings together artificial intelligence and underwater photography to document the ocean life left vulnerable to these changes and share it with the public in new visual ways. Co-led by underwater photographer and visiting artist at MIT Sea Grant Keith Ellenbogen and MIT mechanical engineering PhD student Andreas Mentzelopoulos, the project explores how generative AI can expand scientific storytelling by building on field-based photographic data.
Just as the 19th-century camera transformed our ability to document and reveal the natural world — capturing life with unprecedented detail and bringing distant or hidden environments into view — generative AI marks a new frontier in visual storytelling. Like early photography, AI opens a creative and conceptual space, challenging how we define authenticity and how we communicate scientific and artistic perspectives.
In the LOBSTgER project, generative models are trained exclusively on a curated library of Ellenbogen’s original underwater photographs — each image crafted with artistic intent, technical precision, accurate species identification, and clear geographic context. By building a high-quality dataset grounded in real-world observations, the project ensures that the resulting imagery maintains both visual integrity and ecological relevance. In addition, LOBSTgER’s models are built using custom code developed by Mentzelopoulos to protect the process and outputs from any potential biases from external data or models. LOBSTgER’s generative AI builds upon real photography, expanding the researchers’ visual vocabulary to deepen the public’s connection to the natural world.
At its heart, LOBSTgER operates at the intersection of art, science, and technology. The project draws from the visual language of photography, the observational rigor of marine science, and the computational power of generative AI. By uniting these disciplines, the team is not only developing new ways to visualize ocean life — they are also reimagining how environmental stories can be told. This integrative approach makes LOBSTgER both a research tool and a creative experiment — one that reflects MIT’s long-standing tradition of interdisciplinary innovation.
Underwater photography in New England’s coastal waters is notoriously difficult. Limited visibility, swirling sediment, bubbles, and the unpredictable movement of marine life all pose constant challenges. For the past several years, Ellenbogen has navigated these challenges and is building a comprehensive record of the region’s biodiversity through the project, Space to Sea: Visualizing New England’s Ocean Wilderness. This large dataset of underwater images provides the foundation for training LOBSTgER’s generative AI models. The images span diverse angles, lighting conditions, and animal behaviors, resulting in a visual archive that is both artistically striking and biologically accurate.
LOBSTgER’s custom diffusion models are trained to replicate not only the biodiversity Ellenbogen documents, but also the artistic style he uses to capture it. By learning from thousands of real underwater images, the models internalize fine-grained details such as natural lighting gradients, species-specific coloration, and even the atmospheric texture created by suspended particles and refracted sunlight. The result is imagery that not only appears visually accurate, but also feels immersive and moving.
The models can both generate new, synthetic, but scientifically accurate images unconditionally (i.e., requiring no user input/guidance), and enhance real photographs conditionally (i.e., image-to-image generation). By integrating AI into the photographic workflow, Ellenbogen will be able to use these tools to recover detail in turbid water, adjust lighting to emphasize key subjects, or even simulate scenes that would be nearly impossible to capture in the field. The team also believes this approach may benefit other underwater photographers and image editors facing similar challenges. This hybrid method is designed to accelerate the curation process and enable storytellers to construct a more complete and coherent visual narrative of life beneath the surface.
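In standard diffusion-model terms (a general sketch from the research literature; LOBSTgER’s custom models may differ in detail), the model learns to undo a gradual noising process. During training, each photograph $x_0$ is corrupted according to

$$q(x_t \mid x_0) = \mathcal{N}\!\left(\sqrt{\bar\alpha_t}\,x_0,\ (1-\bar\alpha_t)\mathbf{I}\right),$$

where the factor $\bar\alpha_t$ shrinks as the step $t$ grows. Unconditional generation starts from pure noise, $x_T \sim \mathcal{N}(0,\mathbf{I})$, and applies the learned denoising steps all the way back to a clean image. Image-to-image generation instead noises a real photograph only part-way, to an intermediate step $t_0 < T$,

$$x_{t_0} = \sqrt{\bar\alpha_{t_0}}\,x_{\rm photo} + \sqrt{1-\bar\alpha_{t_0}}\,\epsilon, \qquad \epsilon \sim \mathcal{N}(0,\mathbf{I}),$$

and denoises from there, so the output preserves the photograph’s composition while the model restores or enhances detail.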
In one key series, Ellenbogen captured high-resolution images of lion’s mane jellyfish, blue sharks, American lobsters, and ocean sunfish (Mola mola) while free diving in coastal waters. “Getting a high-quality dataset is not easy,” Ellenbogen says. “It requires multiple dives, missed opportunities, and unpredictable conditions. But these challenges are part of what makes underwater documentation both difficult and rewarding.”
Mentzelopoulos has developed original code to train a family of latent diffusion models for LOBSTgER grounded on Ellenbogen’s images. Developing such models requires a high level of technical expertise, and training models from scratch is a complex process demanding hundreds of hours of computation and meticulous hyperparameter tuning.
The project reflects a parallel process: field documentation through photography and model development through iterative training. Ellenbogen works in the field, capturing rare and fleeting encounters with marine animals; Mentzelopoulos works in the lab, translating those moments into machine-learning contexts that can extend and reinterpret the visual language of the ocean.
“The goal isn’t to replace photography,” Mentzelopoulos says. “It’s to build on and complement it — making the invisible visible, and helping people see environmental complexity in a way that resonates both emotionally and intellectually. Our models aim to capture not just biological realism, but the emotional charge that can drive real-world engagement and action.”
LOBSTgER points to a hybrid future that merges direct observation with technological interpretation. The team’s long-term goal is to develop a comprehensive model that can visualize a wide range of species found in the Gulf of Maine and, eventually, apply similar methods to marine ecosystems around the world.
The researchers suggest that photography and generative AI form a continuum, rather than a conflict. Photography captures what is — the texture, light, and animal behavior during actual encounters — while AI extends that vision beyond what is seen, toward what could be understood, inferred, or imagined based on scientific data and artistic vision. Together, they offer a powerful framework for communicating science through image-making.
In a region where ecosystems are changing rapidly, the act of visualizing becomes more than just documentation. It becomes a tool for awareness, engagement, and, ultimately, conservation. LOBSTgER is still in its infancy, and the team looks forward to sharing more discoveries, images, and insights as the project evolves.
Answer from the lead image: The left image was generated using LOBSTgER’s unconditional models and the right image is real.
For more information, contact Keith Ellenbogen and Andreas Mentzelopoulos.
© AI-generated image: Keith Ellenbogen, Andreas Mentzelopoulos, and LOBSTgER. Photo: Keith Ellenbogen
From MIT, an instruction manual for turning research into startups
Since MIT opened the first-of-its-kind venture studio within a university in 2019, the studio has demonstrated how a systematic process can help turn research into impactful ventures.
Now, MIT Proto Ventures is launching the “R&D Venture Studio Playbook,” a resource to help universities, national labs, and corporate R&D offices establish their own in-house venture studios. The online publication offers a comprehensive framework for building ventures from the ground up within research environments.
“There is a huge opportunity cost to letting great research sit idle,” says Fiona Murray, associate dean for innovation at the MIT Sloan School of Management and a faculty director for Proto Ventures. “The venture studio model makes research systematic, rather than messy and happenstance.”
Bigger than MIT
The new playbook arrives amid growing national interest in revitalizing the United States’ innovation pipeline — a challenge underscored by the fact that just a fraction of academic patents ever reach commercialization.
“Venture-building across R&D organizations, and especially within academia, has been based on serendipity,” says MIT Professor Dennis Whyte, a faculty director for Proto Ventures who helped develop the playbook. “The goal of R&D venture studios is to take away the aspect of chance — to turn venture-building into a systemic process. And this is something not just MIT needs; all research universities and institutions need it.”
Indeed, MIT Proto Ventures is actively sharing the playbook with peer institutions, federal agencies, and corporate R&D leaders seeking to increase the translational return on their research investments.
“We’ve been following MIT’s Proto Ventures model with the vision of delivering new ventures that possess both strong tech push and strong market pull,” says Mark Arnold, associate vice president of Discovery to Impact and managing director of Texas startups at The University of Texas at Austin. “By focusing on market problems first and creating ventures with a supportive ecosystem around them, universities can accelerate the transition of ideas from the lab into real-world solutions.”
What’s in the playbook
The playbook outlines the venture studio process followed by MIT Proto Ventures, which embeds full-time entrepreneurial scientists — called venture builders — inside research labs. These builders work shoulder-to-shoulder with faculty and graduate students to scout promising technologies, validate market opportunities, and co-create new ventures.
“We see this as an open-source framework for impact,” says MIT Proto Ventures Managing Director Gene Keselman. “Our goal is not just to build startups out of MIT — it’s to inspire innovation wherever breakthrough science is happening.”
The playbook was developed by the MIT Proto Ventures team — including Keselman, venture builders David Cohen-Tanugi and Andrew Inglis, and faculty leaders Murray, Whyte, Andrew Lo, Michael Cima, and Michael Short.
“This problem is universal, so we knew if it worked there’d be an opportunity to write the book on how to build a translational engine,” Keselman said. “We’ve had enough success now to be able to say, ‘Yes, this works, and here are the key components.’”
In addition to detailing core processes, the playbook includes case studies, sample templates, and guidance for institutions seeking to tailor the model to fit their unique advantages. It emphasizes that building successful ventures from R&D requires more than mentorship and IP licensing — it demands deliberate, sustained focus, and a new kind of translational infrastructure.
How it works
A key part of MIT’s venture studio is structuring efforts into distinct problem areas, which MIT Proto Ventures calls channels. Each venture builder works in a single channel that aligns with their expertise and interests. For example, Cohen-Tanugi is embedded in the MIT Plasma Science and Fusion Center, working in the Fusion and Clean Energy channel. His first two successes are a venture using superconducting magnets for in-space propulsion and a deep-tech startup improving power efficiency in data centers.
“This playbook is both a call to action and a blueprint,” says Cohen-Tanugi, lead author of the playbook. “We’ve learned that world-changing inventions often remain on the lab bench not because they lack potential, but because no one is explicitly responsible for turning them into businesses. The R&D venture studio model fixes that.”
© Photo courtesy of MIT Proto Ventures
Four from MIT named 2025 Goldwater Scholars
Four MIT rising seniors have been selected to receive a 2025 Barry Goldwater Scholarship: Avani Ahuja and Jacqueline Prawira in the School of Engineering, and Julianna Lian and Alex Tang in the School of Science. An estimated 5,000 college sophomores and juniors from across the United States were nominated for the scholarships, of whom only 441 were selected.
The Goldwater Scholarships have been conferred since 1989 by the Barry Goldwater Scholarship and Excellence in Education Foundation. These scholarships have supported undergraduates who go on to become leading scientists, engineers, and mathematicians in their respective fields.
Avani Ahuja, a mechanical engineering and electrical engineering major, conducts research in the Conformable Decoders group, where she is focused on developing a “wearable conformable breast ultrasound patch” that makes ultrasounds for breast cancer more accessible.
“Doing research in the Media Lab has had a huge impact on me, especially in the ways that we think about inclusivity in research,” Ahuja says.
In her research group, Ahuja works under Canan Dagdeviren, the LG Career Development Professor of Media Arts and Sciences. Ahuja plans to pursue a PhD in electrical engineering. She aspires to conduct research in electromechanical systems for women’s health applications and teach at the university level.
“I want to thank Professor Dagdeviren for all her support. It’s an honor to receive this scholarship, and it’s amazing to see that women’s health research is getting recognized in this way,” Ahuja says.
Julianna Lian studies mechanochemistry, organic chemistry, and polymer chemistry in the lab of Professor Jeremiah Johnson, the A. Thomas Guertin Professor of Chemistry. In addition to her studies, she serves the MIT community as an emergency medical technician (EMT) with MIT Emergency Medical Services, is a member of MIT THINK, and serves as a ClubChem mentorship chair.
“Receiving this award has been a tremendous opportunity to not only reflect on how much I have learned, but also on the many, many people I have had the chance to learn from,” says Lian. “I am deeply grateful for the guidance, support, and encouragement of these teachers, mentors, and friends. And I am excited to carry forward the lasting curiosity and excitement for chemistry that they have helped inspire in me.”
After graduation, Lian plans to pursue a PhD in organic chemistry, conduct research at the interface of synthetic chemistry and materials science aided by computation, and teach at the university level.
Jacqueline Prawira, a materials science and engineering major, joined the Center for Decarbonization and Electrification of Industry as a first-year Undergraduate Research Opportunities Program student, and became a co-inventor on a patent and a research technician at spinout company Rock Zero. She has also worked in collaboration with Indigenous farmers and Diné College students on the Navajo Nation.
“I’ve become significantly more cognizant of how I listen to people and stories, the tangled messiness of real-world challenges, and the critical skills needed to tackle complex sustainability issues,” Prawira says.
Prawira is mentored by Yet-Ming Chiang, professor of materials science and engineering. Her career goals are to pursue a PhD in materials science and engineering and to research sustainable materials and processes to solve environmental challenges and build a sustainable society.
“Receiving the prestigious title of 2025 Goldwater Scholar validates my current trajectory in innovating sustainable materials and demonstrates my growth as a researcher,” Prawira says. “This award signifies my future impact in building a society where sustainability is the norm, instead of just another option.”
Alex Tang studies the effects of immunotherapy and targeted molecular therapy on the tumor microenvironment in metastatic colorectal cancer patients. He is supervised by professors Jonathan Chen at Northwestern University and Nir Hacohen at the Broad Institute of MIT and Harvard.
“My mentors and collaborators have been instrumental to my growth since I joined the lab as a freshman. I am incredibly grateful for the generous mentorship and support of Professor Hacohen and Professor Chen, who have taught me how to approach scientific investigation with curiosity and rigor,” says Tang. “I’d also like to thank my advisor Professor Adam Martin and first-year advisor Professor Angela Belcher for their guidance throughout my undergraduate career thus far. I am excited to carry forward this work as I progress in my career.” Tang intends to pursue physician-scientist training following graduation.
The Scholarship Program honoring Senator Barry Goldwater was designed to identify, encourage, and financially support outstanding undergraduates interested in pursuing research careers in the sciences, engineering, and mathematics. The Goldwater Scholarship is the preeminent undergraduate award of its type in these fields.
The tenured engineers of 2025
In 2025, MIT granted tenure to 11 faculty members across the School of Engineering. This year’s tenured engineers hold appointments in the departments of Aeronautics and Astronautics, Biological Engineering, Chemical Engineering, Electrical Engineering and Computer Science (EECS) — which reports jointly to the School of Engineering and MIT Schwarzman College of Computing — Materials Science and Engineering, Mechanical Engineering, and Nuclear Science and Engineering.
“It is with great pride that I congratulate the 11 newest tenured faculty members in the School of Engineering. Their dedication to advancing their fields, mentoring future innovators, and contributing to a vibrant academic community is truly inspiring,” says Anantha Chandrakasan, chief innovation and strategy officer, dean of engineering, and the Vannevar Bush Professor of Electrical Engineering and Computer Science, who will assume the role of MIT provost on July 1. “This milestone is not only a testament to their achievements, but a promise of even greater impact ahead.”
This year’s newly tenured engineering faculty include:
Bryan Bryson, the Phillip and Susan Ragon Career Development Professor in the Department of Biological Engineering, conducts research in infectious diseases and immunoengineering. He is interested in developing new tools to dissect the complex dynamics of bacterial infection at a variety of scales ranging from single cells to infected animals, sitting in both “reference frames” by taking both an immunologist’s and a microbiologist’s perspective.
Connor Coley is the Class of 1957 Career Development Professor and associate professor of chemical engineering, with a shared appointment in EECS. His research group develops new computational methods at the intersection of artificial intelligence and chemistry with relevance to small molecule drug discovery, chemical synthesis, and structure elucidation.
Mohsen Ghaffari is the Steven and Renee Finn Career Development Professor and an associate professor in EECS. His research explores the theory of distributed and parallel computation. He has done influential work on a range of algorithmic problems, including generic derandomization methods for distributed and parallel computing, improved distributed algorithms for graph problems, sublinear algorithms derived via distributed techniques, and algorithmic and impossibility results for massively parallel computation.
Rafael Gomez-Bombarelli, the Paul M. Cook Development Professor and associate professor of materials science and engineering, works at the interface between machine learning and atomistic simulations. He uses computational tools to tackle design of materials in complex combinatorial search spaces, such as organic electronic materials, energy storage polymers and molecules, and heterogeneous (electro)catalysts.
Song Han, an associate professor in EECS, is a pioneer in model compression and TinyML. He has innovated in key areas of pruning, quantization, parallelization, KV cache optimization, long-context learning, and multi-modal representation learning to minimize generative AI costs, and he designed the first hardware accelerator (EIE) to exploit weight sparsity.
Kaiming He, the Douglass Ross (1954) Career Development Professor of Software Technology and an associate professor in EECS, is best known for his work on deep residual networks (ResNets). His research focuses on building computer models that can learn representations and develop intelligence from and for the complex world, with the long-term goal of augmenting human intelligence with more capable artificial intelligence.
Phillip Isola, the Class of 1948 Career Development Professor and associate professor in EECS, studies computer vision, machine learning, and AI. His research aims to uncover fundamental principles of intelligence, with a particular focus on how models and representations of the world can be acquired through self-supervised learning, from raw sensory experience alone, and without the use of labeled data.
Mingda Li is the Class of 1947 Career Development Professor and an associate professor in the Department of Nuclear Science and Engineering. His research lies in characterization and computation.
Richard Linares is an associate professor in the Department of Aeronautics and Astronautics. His research focuses on astrodynamics, space systems, and satellite autonomy. Linares develops advanced computational tools and analytical methods to address challenges associated with space traffic management, space debris mitigation, and space weather modeling.
Jonathan Ragan-Kelley, an associate professor in EECS, has designed everything from tools for visual effects in movies to the Halide programming language that’s widely used in industry for photo editing and processing. His research focuses on high-performance computer graphics and accelerated computing, at the intersection of graphics with programming languages, systems, and architecture.
Arvind Satyanarayan is an associate professor in EECS. His research areas cover data visualization, human-computer interaction, and artificial intelligence and machine learning. He leads the MIT Visualization Group, which uses interactive data visualization as a petri dish to study intelligence augmentation — how computation can help amplify human cognition and creativity while respecting our agency.
© Photo: Lillie Paquette
Researchers present bold ideas for AI at MIT Generative AI Impact Consortium kickoff event
Launched in February of this year, the MIT Generative AI Impact Consortium (MGAIC), a presidential initiative led by MIT’s Office of Innovation and Strategy and administered by the MIT Stephen A. Schwarzman College of Computing, issued a call for proposals, inviting researchers from across MIT to submit ideas for innovative projects studying high-impact uses of generative AI models.
The call received 180 submissions from nearly 250 faculty members, spanning all of MIT’s five schools and the college. The overwhelming response across the Institute exemplifies the growing interest in AI and follows in the wake of MIT’s Generative AI Week and call for impact papers. Fifty-five proposals were selected for MGAIC’s inaugural seed grants, with several more selected to be funded by the consortium’s founding company members.
Over 30 funding recipients presented their proposals to the greater MIT community at a kickoff event on May 13. Anantha P. Chandrakasan, chief innovation and strategy officer and dean of the School of Engineering, who heads the consortium, welcomed the attendees and thanked the consortium’s founding industry members.
“The amazing response to our call for proposals is an incredible testament to the energy and creativity that MGAIC has sparked at MIT. We are especially grateful to our founding members, whose support and vision helped bring this endeavor to life,” adds Chandrakasan. “One of the things that has been most remarkable about MGAIC is that this is a truly cross-Institute initiative. Deans from all five schools and the college collaborated in shaping and implementing it.”
Vivek F. Farias, the Patrick J. McGovern (1959) Professor at the MIT Sloan School of Management and co-faculty director of the consortium with Tim Kraska, associate professor of electrical engineering and computer science in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), emceed the afternoon of five-minute lightning presentations.
Presentation highlights include:
“AI-Driven Tutors and Open Datasets for Early Literacy Education,” presented by Ola Ozernov-Palchik, a research scientist at the McGovern Institute for Brain Research, proposed refining AI tutors for pK-7 students to help decrease literacy disparities.
“Developing jam_bots: Real-Time Collaborative Agents for Live Human-AI Musical Improvisation,” presented by Anna Huang, assistant professor of music and assistant professor of electrical engineering and computer science, and Joe Paradiso, the Alexander W. Dreyfoos (1954) Professor in Media Arts and Sciences at the MIT Media Lab, aims to enhance human-AI musical collaboration in real-time for live concert improvisation.
“GENIUS: GENerative Intelligence for Urban Sustainability,” presented by Norhan Bayomi, a postdoc at the MIT Environmental Solutions Initiative and a research assistant in the Urban Metabolism Group, aims to address the lack of a standardized approach for evaluating and benchmarking cities’ climate policies.
Georgia Perakis, the John C Head III Dean (Interim) of the MIT Sloan School of Management and professor of operations management, operations research, and statistics, who serves as co-chair of the GenAI Dean’s oversight group with Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, ended the event with closing remarks that emphasized “the readiness and eagerness of our community to lead in this space.”
“This is only the beginning,” she continued. “We are at the front edge of a historic moment — one where MIT has the opportunity, and the responsibility, to shape the future of generative AI with purpose, with excellence, and with care.”
© Photo: Jiin Kang
Shining light on scientific superstar

The Vera C. Rubin Observatory, a new astronomy and astrophysics facility in Cerro Pachón, Chile.
Courtesy of Vera C. Rubin Observatory
Kermit Pattison
Harvard Staff Writer
Vera Rubin, whose dark-matter discoveries changed astronomy and physics, gets her due with namesake observatory, commemorative quarter
Nearly 80 years ago, a promising astronomy student named Vera Rubin passed up the opportunity for graduate study at Harvard. Now, nearly a decade after her death, the pioneering astronomer will be celebrated on campus as a scientific superstar.
Rubin, whose discoveries about dark matter transformed astronomy and physics, will be honored with a weeklong series of events starting June 23, including the first public release of images from a new observatory bearing her name and the unveiling of a commemorative quarter.
“Intellectually, we’re all still staggering around with the consequences of the astronomy that she did,” said Christopher W. Stubbs, the Samuel C. Moncher Professor of Physics and of Astronomy and a member of the scientific team for the new observatory. “She brought scientific chaos that we’ve all been wrestling with ever since.”
The celebration will kick off Monday with a livestream of the first images from the Vera C. Rubin Observatory, a new astronomy and astrophysics facility in Cerro Pachón, Chile — a mountaintop site chosen because its aridity and its 2,600-meter altitude offer clear views of the sky.
Funded by the U.S. National Science Foundation and the U.S. Department of Energy, the 350-ton instrument is the most powerful survey telescope in the world and incorporates the largest digital camera ever constructed. It will take detailed images of the Southern Hemisphere sky to compile ultra-wide, ultra-high-definition, time-lapse video of the cosmos.
The first images will be released at 11 a.m. The main unveiling will take place at the National Academy of Sciences in Washington, D.C., and will be available online and to watch parties around the globe. The Harvard gathering will begin at 10:30 a.m. in Jefferson Lab 250.
More than two decades ago, Stubbs was among a group of scientists who won a federal grant to begin planning the new telescope. That proposal eventually grew into the $800 million observatory that will begin service this month after many twists and turns and collaborations with other institutions.
Stubbs said the new images will be spectacular.
“When you look at these pictures, you just kind of go, ‘Wow, look at all those galaxies!’” he said. “It’s like a wallpaper of galaxies — near ones, far ones, red ones, blue ones, interacting, colliding, different shapes, different sizes.”
The telescope will repeatedly sweep the sky in a 10-year survey. It will produce 20 terabytes of data every night and in one year will generate more optical astronomy data than all previous telescopes combined.

Vera Rubin measuring spectra in 1974.
Credit: Carnegie Institution for Science
“For solar system science, it’s a huge advance,” said Matt Holman, an astrophysicist at the Harvard-Smithsonian Center for Astrophysics. “It’s going to find nearly a factor of 10 more objects than we presently know, ranging from asteroids that we’re concerned might hit the Earth, to the most distant Kuiper Belt objects, and perhaps even planets in our solar system that we don’t know about.”
On Thursday, Harvard will host a series of talks and a science festival to celebrate Rubin. The event will coincide with the release of the Rubin quarter, part of a U.S. Mint program honoring influential American women.
Rubin is best known as the scientist who shined light on dark matter. Born in 1928, she became fascinated by astronomy as a child looking at stars outside her bedroom window.
She studied astronomy at Vassar College and won admission to the graduate program at Harvard, but chose to study at Cornell because her new husband was enrolled there.
As she later recalled, the director of the Harvard observatory sent a formal letter acknowledging her withdrawal and added a handwritten note: “Damn you women. Every time I get a good one ready, she goes off and gets married.”
Later, Rubin earned a Ph.D. from Georgetown and studied the properties and motions of distant galaxies.
As a female scientist, she repeatedly encountered condescension from male colleagues and difficulty accessing scientific facilities and conferences. She spent most of her career as a researcher at the Carnegie Institution in Washington, D.C., and raised four children, all of whom became scientists.
Her most celebrated finding was that most of the universe is invisible: her calculations showed that galaxies must contain at least five to 10 times more mass than can be observed directly from the light emitted by ordinary matter.
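The logic behind that inference is compact (a standard textbook argument, not drawn from the article). A star on a circular orbit of radius $r$ about a galaxy’s centre satisfies

$$\frac{v^2}{r} = \frac{G\,M(<r)}{r^2} \quad\Longrightarrow\quad M(<r) = \frac{v^2 r}{G},$$

where $M(<r)$ is the mass enclosed within the orbit. If the visible stars and gas were all there is, orbital speeds beyond the luminous disk should fall off as $v \propto 1/\sqrt{r}$. Rubin instead measured rotation curves that stay flat, $v(r) \approx \mathrm{const}$, which forces $M(<r) \propto r$: the enclosed mass keeps growing far past the visible edge of the galaxy, pointing to a halo of unseen matter.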
By the time she died in 2016 at age 88, her discoveries had been largely affirmed.
“First of all, she made an amazing discovery, and anyone of any background who does that is worthy of respect,” said Elana Urbach, a Harvard postdoctoral researcher who works with data from the new Rubin observatory and is organizing the campus celebration. “But the fact that she made this amazing discovery with the adversity that she faced does add something to her story — and make her more of a role model.”
A taste for microbes
A video of a brooding octopus mother interacting with a fake egg doped with a microbial molecule isolated from bacteria on rejected octopus eggs. The mother uses her siphon to eject the egg from her clutch.
Kermit Pattison
Harvard Staff Writer
New research reveals how the octopus uses arms to sense chemical clues from microbiomes
The octopus is a creature with sensitive feelings.
Most of its 500 million neurons are in its arms, which explore the seafloor like eight muscular tongues. It navigates the deep with a “taste by touch” nervous system powered by 10,000 sensory cells in each individual suction cup.
Now, a new study by Harvard biologists reveals part of what the octopus is feeling — biochemical information from the microbial world. By tasting the biochemicals emitted by ever-changing bacterial communities, the animal gains information essential for survival, such as whether prey is safe to eat or whether unhealthy eggs should be ejected from the nest.
“Everything is coated by microbes, especially in these underwater worlds,” said Rebecka Sepela, a postdoctoral researcher and lead author of the new study. “These microbial communities are constantly restructuring in response to environmental conditions and will pump out different chemicals to reflect their surface-specific surroundings. The octopus senses the chemicals made by certain microbes, such as those growing on the surfaces of crabs or eggs, to distinguish the vitals of these surfaces.”
The sensory system of the octopus has been a topic of ongoing research at Harvard. In 2020, researchers in the lab of Nicholas Bellono, a professor of molecular and cellular biology, detailed how “chemotactile receptors” armed octopuses with their unique taste-by-touch capability. In 2023, the group described how these sensory organs had evolved from the acetylcholine receptors of their ancestors — but differently in octopuses than in their cephalopod relative the squid.

The California two-spot octopus, Octopus bimaculoides.

Octopuses use chemotactile receptors to sense their surroundings.
Photo by Anik Grearson.

The California two-spot octopus incubates a clutch of eggs in her den.
Photo by Anik Grearson.
For the latest study, published Tuesday in the journal Cell, the Bellono team sought to better understand just what these organs were sensing. Octopuses forage by sweeping their arms over the seafloor and probing nooks and crannies for food. Even in the dark, they “blind feed” by relying only on the senses of their appendages. But it remained unclear just how they identified prey and other objects of interest.
To shed light on that question, the Harvard researchers simply let the animals show them what was important. The lab follows a “curiosity-based approach” of investigating biological novelties and trying to decipher the underlying mechanisms down to the level of molecules and proteins. It keeps California two-spot octopuses (a species native to the Pacific coast of the Americas) in saltwater tanks — with the lids fastened tight with Velcro straps and weighed down with bricks. “We’ve had them open their tanks and get out,” explained Bellono.
In watching the octopuses, the researchers saw that two objects elicited strong reactions — the shells of fiddler crabs (a favorite food) and octopus eggs.
“It was very octopus-centric,” said Sepela. “By keeping the animal at the center of our study, we were able to find molecules in the environment that are actually meaningful to the animal.”
The researchers found that octopuses happily fed on live crabs, but rejected decayed ones. Octopus mothers avidly cleaned and groomed their clutches of eggs, but sometimes ejected infertile or dead eggs.
When the scientists examined these materials under an electron microscope, they found stark differences in microbial communities. Live crabs had only a few microbes on their shells, but decaying crabs were coated by many types of bacteria. Likewise, eggs rejected by octopus mothers were covered by spirillum-shaped bacteria while healthy eggs were not.

The scientists used RNA barcoding to reveal the taxonomic identities and abundances of these microbial communities before examining the molecules emitted by the microbes — and the responses these substances elicited in the octopus. The team cultivated nearly 300 strains of marine bacteria and tested their effects on octopus chemotactile receptors that had been cloned in the lab.
They discovered that certain microbes activated certain octopus receptors. In one dramatic finding, the scientists identified a molecule emitted by bacteria commonly found on eggs rejected by the mother octopus. Researchers made a fake egg, coated it with the substance, and dropped it into an octopus nest. After briefly grooming the egg, the mother ejected it from her brood.
Microbes — or single-celled organisms — are the most abundant creatures on Earth. The body of a single human hosts around 39 trillion microbes. Likewise, the Earth, waters, and even the air teem with microbial communities known as microbiomes.

Rebecka Sepela and Nicholas Bellono.
Niles Singer/Harvard Staff Photographer
Research on microbiomes focuses on the relationship between microbes and their hosts — how gut bacteria aid in digestion, for example — but the new paper explores a lesser-known realm: how animals interact with external microbes and adapt to an ever-changing world. Science has only a murky understanding of how multicellular animals read this outside microbiome.
“There is a lot more to be explored,” said Bellono. “Microbes are present on almost every surface. We had a nice system to look at this in the octopus, but that doesn’t mean it’s not happening across life.”
The Bellono Lab collaborated on the research with the teams of Jon Clardy, a professor of biological chemistry and molecular pharmacology at Harvard Medical School, and Ryan Hibbs, a professor of neurobiology at the University of California, San Diego.
Cosmic signal from the very early universe will help astronomers detect the first stars

Now, an international group of astronomers led by the University of Cambridge have shown that we will be able to learn about the masses of the earliest stars by studying a specific radio signal – created by hydrogen atoms filling the gaps between star-forming regions – originating just a hundred million years after the Big Bang.
By studying how the first stars and their remnants affected this signal, called the 21-centimetre signal, the researchers have shown that future radio telescopes will help us understand the very early universe, and how it transformed from a nearly homogeneous mass of mostly hydrogen to the incredible complexity we see today. Their results are reported in the journal Nature Astronomy.
“This is a unique opportunity to learn how the universe’s first light emerged from the darkness,” said co-author Professor Anastasia Fialkov from Cambridge’s Institute of Astronomy. “The transition from a cold, dark universe to one filled with stars is a story we’re only beginning to understand.”
The study of the universe’s most ancient stars hinges on the faint glow of the 21-centimetre signal, a subtle energy signal from over 13 billion years ago. This signal, influenced by the radiation from early stars and black holes, provides a rare window into the universe’s infancy.
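Concretely, these experiments measure a tiny brightness-temperature offset between the hydrogen 21-centimetre line and the cosmic microwave background. A standard approximation from the 21-centimetre literature (not quoted from the paper itself; the prefactor depends on the assumed cosmological parameters) is

$$\delta T_b \approx 27\, x_{\rm HI}\,(1+\delta_b)\left(1 - \frac{T_{\rm CMB}(z)}{T_S}\right)\sqrt{\frac{1+z}{10}}\ \mathrm{mK},$$

where $x_{\rm HI}$ is the neutral hydrogen fraction, $\delta_b$ the local gas overdensity, and $T_S$ the hydrogen spin temperature. Ultraviolet light from the first stars and X-rays from their remnants alter $T_S$ and $x_{\rm HI}$, which is how the properties of those stars leave an imprint on the signal.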
Fialkov leads the theory group of REACH (the Radio Experiment for the Analysis of Cosmic Hydrogen), a radio antenna and one of two major projects that could help us learn about the Cosmic Dawn and the Epoch of Reionisation, when the first stars reionised the neutral hydrogen atoms in the universe.
Although REACH, which captures radio signals, is still in its calibration stage, it promises to reveal data about the early universe. Meanwhile, the Square Kilometre Array (SKA) – a massive array of antennas under construction – will map fluctuations in cosmic signals across vast regions of the sky.
Both projects are vital in probing the masses, luminosities, and distribution of the universe’s earliest stars. In the current study, Fialkov – who is also a member of the SKA – and her collaborators developed a model that makes predictions for the 21-centimetre signal for both REACH and the SKA, and found that the signal is sensitive to the masses of the first stars.
“We are the first group to consistently model the dependence of the 21-centimetre signal on the masses of the first stars, including the impact of ultraviolet starlight and X-ray emissions from X-ray binaries produced when the first stars die,” said Fialkov, who is also a member of Cambridge’s Kavli Institute for Cosmology. “These insights are derived from simulations that integrate the primordial conditions of the universe, such as the hydrogen-helium composition produced by the Big Bang.”
In developing their theoretical model, the researchers studied how the 21-centimetre signal reacts to the mass distribution of the first stars, known as Population III stars. They found that previous studies have underestimated this connection as they did not account for the number and brightness of X-ray binaries – binary systems made of a normal star and a collapsed star – among Population III stars, and how they affect the 21-centimetre signal.
Unlike optical telescopes like the James Webb Space Telescope, which capture vivid images, radio astronomy relies on statistical analysis of faint signals. REACH and SKA will not be able to image individual stars, but will instead provide information about entire populations of stars, X-ray binary systems and galaxies.
“It takes a bit of imagination to connect radio data to the story of the first stars, but the implications are profound,” said Fialkov.
“The predictions we are reporting have huge implications for our understanding of the nature of the very first stars in the Universe,” said co-author Dr Eloy de Lera Acedo, Principal Investigator of the REACH telescope and PI at Cambridge of the SKA development activities. “We show evidence that our radio telescopes can tell us details about the mass of those first stars and how these early lights may have been very different from today’s stars.
“Radio telescopes like REACH are promising to unlock the mysteries of the infant Universe, and these predictions are essential to guide the radio observations we are doing from the Karoo, in South Africa.”
The research was supported in part by the Science and Technology Facilities Council (STFC), part of UK Research and Innovation (UKRI). Anastasia Fialkov is a Fellow of Magdalene College, Cambridge. Eloy de Lera Acedo is an STFC Ernest Rutherford Fellow and a Fellow of Selwyn College, Cambridge.
Reference:
T. Gessey-Jones et al. ‘Determination of the mass distribution of the first stars from the 21-cm signal.’ Nature Astronomy (2025). DOI: 10.1038/s41550-025-02575-x
The formation of the first stars and galaxies, when the universe transitioned from darkness to light, was a key turning point in cosmic history, known as the Cosmic Dawn. However, even with the most powerful telescopes, we can’t directly observe these earliest stars, so determining their properties is one of the biggest challenges in astronomy.
The text in this work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Images, including our videos, are Copyright ©University of Cambridge and licensors/contributors as identified. All rights reserved. We make our image and video content available in a number of ways – on our main website under its Terms and conditions, and on a range of channels including social media that permit your use and sharing of our content under their respective Terms.
“Cold spray” 3D printing technique proves effective for on-site bridge repair
More than half of the nation’s 623,218 bridges are experiencing significant deterioration. Through an in-field case study conducted in western Massachusetts, a team led by the University of Massachusetts at Amherst in collaboration with researchers from the MIT Department of Mechanical Engineering (MechE) has just successfully demonstrated that 3D printing may provide a cost-effective, minimally disruptive solution.
“Anytime you drive, you go under or over a corroded bridge,” says Simos Gerasimidis, associate professor of civil and environmental engineering at UMass Amherst and former visiting professor in the Department of Civil and Environmental Engineering at MIT, in a press release. “They are everywhere. It’s impossible to avoid, and their condition often shows significant deterioration. We know the numbers.”
The numbers, according to the American Society of Civil Engineers’ 2025 Report Card for America’s Infrastructure, are staggering: Across the United States, 49.1 percent of the nation’s 623,218 bridges are in “fair” condition and 6.8 percent are in “poor” condition. The projected cost to restore all of these failing bridges exceeds $191 billion.
A proof-of-concept repair took place last month on a small, corroded section of a bridge in Great Barrington, Massachusetts. The technique, called cold spray, can extend the life of beams, reinforcing them with newly deposited steel. The process accelerates particles of powdered steel in heated, compressed gas, and then a technician uses an applicator to spray the steel onto the beam. Repeated sprays create multiple layers, restoring thickness and other structural properties.
This method has proven to be an effective solution for other large structures like submarines, airplanes, and ships, but bridges present a problem on a greater scale. Unlike movable vessels, stationary bridges cannot be brought to the 3D printer — the printer must be brought on-site — and, to lessen systemic impacts, repairs must also be made with minimal disruptions to traffic, which the new approach allows.
“Now that we’ve completed this proof-of-concept repair, we see a clear path to a solution that is much faster, less costly, easier, and less invasive,” says Gerasimidis. “To our knowledge, this is a first. Of course, there is some R&D that needs to be developed, but this is a huge milestone to that.”
“This is a tremendous collaboration where cutting-edge technology is brought to address a critical need for infrastructure in the commonwealth and across the United States,” says John Hart, Class of 1922 Professor and head of the Department of Mechanical Engineering at MIT. Hart and Haden Quinlan, senior program manager in the Center for Advanced Production Technologies at MIT, are leading MIT’s efforts in the project. Hart is also faculty co-lead of the recently announced MIT Initiative for New Manufacturing.
“Integrating digital systems with advanced physical processing is the future of infrastructure,” says Quinlan. “We’re excited to have moved this technology beyond the lab and into the field, and grateful to our collaborators in making this work possible.”
UMass says the Massachusetts Department of Transportation (MassDOT) has been a valued research partner, helping to identify the problem and providing essential support for the development and demonstration of the technology. Technical guidance and funding support were provided by the MassDOT Highway Division and the Research and Technology Transfer Program.
Equipment for this project was supported through the Massachusetts Manufacturing Innovation Initiative, a statewide program led by the Massachusetts Technology Collaborative (MassTech)’s Center for Advanced Manufacturing that helps bridge the gap between innovation and commercialization in hard tech manufacturing.
“It’s a very Massachusetts success story,” Gerasimidis says. “It involves MassDOT being open-minded to new ideas. It involves UMass and MIT putting [together] the brains to do it. It involves MassTech to bring manufacturing back to Massachusetts. So, I think it’s a win-win for everyone involved here.”
The bridge in Great Barrington is scheduled for demolition in a few years. After demolition, the recently sprayed beams will be taken back to UMass for testing and measurement, to study how well the deposited steel adhered to the structure in the field compared with a controlled lab setting, whether it corroded further after being sprayed, and what its mechanical properties are.
This demonstration builds on several years of research by the UMass and MIT teams, including development of a “digital thread” approach to scan corroded beam surfaces and determine material deposition profiles, alongside laboratory studies of cold spray and other additive manufacturing approaches that are suited to field deployment.
Altogether, this work is a collaborative effort among UMass Amherst, MIT MechE, MassDOT, MassTech, the U.S. Department of Transportation, and the Federal Highway Administration. Research reports are available on the MassDOT website.
© Photo: Alexia Cota/UMass Amherst
Placenta and hormone levels in the womb may have been key driver in human evolution

Dr Alex Tsompanidis, senior researcher at the Autism Research Centre at the University of Cambridge and lead author of the new study, said: “Small variations in the prenatal levels of steroid hormones, like testosterone and oestrogen, can predict the rate of social and cognitive learning in infants and even the likelihood of conditions such as autism. This prompted us to consider their relevance for human evolution.”
One explanation for the evolution of the human brain may be in the way humans adapted to be social. Professor Robin Dunbar, an Evolutionary Biologist at the University of Oxford and joint senior author of this new study said: “We’ve known for a long time that living in larger, more complex social groups is associated with increases in the size of the brain. But we still don’t know what mechanisms may link these behavioural and physical adaptations in humans.”
In this new paper, published today in Evolutionary Anthropology, the researchers now propose that the mechanism may be found in prenatal sex steroid hormones, such as testosterone or oestrogens, and the way these affect the developing brain and behaviour in humans.
Using ‘mini-brains’ – clusters of human neuronal cells that are grown in a petri dish from donors’ stem cells – other scientists have been able to study, for the first time, the effects of these hormones on the human brain. Recent discoveries have shown that testosterone can increase the size of the brain, while oestrogens can improve the connectivity between neurons.
In both humans and other primates such as chimpanzees and gorillas, the placenta can link the mother’s and baby’s endocrine systems to produce these hormones in varying amounts.
Professor Graham Burton, Founding Director of the Loke Centre for Trophoblast Research at the University of Cambridge and co-author of the new paper, said: “The placenta regulates the duration of the pregnancy and the supply of nutrients to the fetus, both of which are crucial for the development of our species’ characteristically large brains. But the advantage of human placentas over those of other primates has been less clear.”
Two previous studies have shown that oestrogen levels are higher in human pregnancies than in those of other primate species.
Another characteristic of humans as a species is our ability to form and maintain social groups larger than those of other primates and of extinct human species such as the Neanderthals. But to do this, humans must have adapted in ways that maintain high levels of fertility while also reducing competition for mates and resources within large groups.
Prenatal sex steroid hormones, such as testosterone and oestrogen, are also important for regulating the way males and females interact and develop, a process known as sex differentiation. For example, having higher testosterone relative to oestrogen leads to more male-like features in anatomy (e.g., in physical size and strength) and in behaviour (e.g., in competition).
But in humans, while these on-average sex differences exist, they are reduced compared with our closest primate relatives and with extinct human species such as the Neanderthals. Instead, anatomical features specific to humans appear to relate more to female than to male biology, and to the effects of oestrogens (e.g., reduced body hair and a large ratio between the second and fourth digits).
The researchers propose that the key to explain this may lie again with the placenta, which rapidly turns testosterone to oestrogens, using an enzyme called aromatase. Recent discoveries show that humans have higher levels of aromatase compared to macaques, and that males may have slightly higher levels compared to females.
Bringing all these lines of evidence together, the authors propose that high levels of prenatal sex steroid hormones in the womb, combined with increased placental function, may have made human brains larger and more interconnected. At the same time, a lower ratio of androgens (like testosterone) to oestrogens may have led to reductions in competition between males, while also improving fertility in females, allowing humans to form larger, more cohesive social groups.
Professor Simon Baron-Cohen, Director of the Autism Research Centre at the University of Cambridge and joint senior author on the paper, said: “We have been studying the effects of prenatal sex steroids on neurodevelopment for the past 20 years. This has led to the discovery that prenatal sex steroids are important for neurodiversity in human populations. This new hypothesis takes this further in arguing that these hormones may have also shaped the evolution of the human brain.”
Dr Tsompanidis added: “Our hypothesis puts pregnancy at the heart of our story as a species. The human brain is remarkable and unique, but it does not develop in a vacuum. Adaptations in the placenta and the way it produces sex steroid hormones may have been crucial for our brain’s evolution, and for the emergence of the cognitive and social traits that make us human.”
Reference
Tsompanidis, A et al. The placental steroid hypothesis of human brain evolution. Evolutionary Anthropology; 20 June 2025; DOI: 10.1002/evan.70003
The placenta and the hormones it produces may have played a crucial role in the evolution of the human brain, while also leading to the behavioural traits that have made human societies able to thrive and expand, according to a new hypothesis proposed by researchers from the Universities of Cambridge and Oxford.
The text in this work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Images, including our videos, are Copyright ©University of Cambridge and licensors/contributors as identified. All rights reserved. We make our image and video content available in a number of ways – on our main website under its Terms and conditions, and on a range of channels including social media that permit your use and sharing of our content under their respective Terms.
Education key to tackling climate misinformation, say Cambridge experts

Representatives from Cambridge University Press & Assessment, Cambridge Zero, the Cambridge Institute for Sustainability Leadership and Cambridge Judge Business School convened the session and were joined by a range of experts working on climate change-related research and education. Every speaker from across higher education highlighted the importance of identifying misinformation and disinformation in advancing climate action. Read more about the workshop.
University of Cambridge experts highlighted the key role of education in combatting climate misinformation at a Global Sustainable Development Congress (GSDC) workshop in Turkey.
Memory safety is at a tipping point
Social security numbers stolen. Public transport halted. Hospital systems frozen until ransoms are paid. These are some of the damaging consequences of insecure memory in computer systems.
Over the past decade, public awareness of such cyberattacks has intensified, as their impacts have harmed individuals, corporations, and governments. Today, this awareness is coinciding with technologies that are finally mature enough to eliminate vulnerabilities in memory safety.
"We are at a tipping point — now is the right time to move to memory-safe systems," says Hamed Okhravi, a cybersecurity expert in MIT Lincoln Laboratory’s Secure Resilient Systems and Technology Group.
In an op-ed earlier this year in Communications of the ACM, Okhravi joined 20 other luminaries in the field of computer security to lay out a plan for achieving universal memory safety. They argue for a standardized framework as an essential next step to adopting memory-safety technologies throughout all forms of computer systems, from fighter jets to cell phones.
Memory-safety vulnerabilities occur when a program performs unintended or erroneous operations in memory. Such operations are prevalent, accounting for an estimated 70 percent of software vulnerabilities. If attackers gain access to memory, they can potentially steal sensitive information, alter program execution, or even take control of the computer system.
These vulnerabilities exist largely because common software programming languages, such as C or C++, are inherently memory-unsafe. A single error by a software engineer, perhaps one line among a system's millions of lines of code, can be enough for an attacker to exploit. In recent years, new memory-safe languages, such as Rust, have been developed. But rewriting legacy systems in memory-safe languages can be costly and complicated.
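As a minimal sketch of the difference, consider how a memory-safe language handles the kind of one-line error described above. The snippet below is an assumed illustration in Rust, not code from any system discussed here: an out-of-bounds access that C or C++ would let silently corrupt neighboring memory is checked at runtime, and a use-after-free is rejected before the program ever runs.

```rust
// Hypothetical illustration of memory-safe behavior in Rust; not taken
// from any system described in this article.
fn main() {
    let buffer = vec![10u8, 20, 30, 40];

    // The C equivalent of reading buffer[4] walks one element past the
    // end of the allocation. Rust's checked accessor returns None
    // instead of touching whatever lies next to the buffer.
    match buffer.get(4) {
        Some(byte) => println!("buffer[4] = {byte}"),
        None => println!("index 4 rejected: out of bounds"),
    }

    // Use-after-free is ruled out at compile time: once `buffer` is
    // dropped, the compiler refuses any further use of it.
    drop(buffer);
    // println!("{buffer:?}"); // would not compile: use after move
}
```

The same logic written in C compiles and runs, with whatever data sits beside the buffer paying the price; that asymmetry is why translating legacy code, discussed below, is attractive despite the cost.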
Okhravi focuses on the national security implications of memory-safety vulnerabilities. For the U.S. Department of Defense (DoD), whose systems comprise billions of lines of legacy C or C++ code, memory safety has long been a known problem. The National Security Agency (NSA) and the federal government have recently urged technology developers to eliminate memory-safety vulnerabilities from their products. Security concerns extend beyond military systems to widespread consumer products.
"Cell phones, for example, are not immediately important for defense or war-fighting, but if we have 200 million vulnerable cell phones in the nation, that’s a serious matter of national security," Okhravi says.
Memory-safe technology
In recent years, several technologies have emerged to help patch memory vulnerabilities in legacy systems. As the guest editor of a special issue of IEEE Security & Privacy, Okhravi solicited articles from top contributors in the field to highlight these technologies and the ways they can build on one another.
Some of these memory-safety technologies have been developed at Lincoln Laboratory, with sponsorship from DoD agencies. These technologies include TRACER and TASR, which are software products for Windows and Linux systems, respectively, that reshuffle the location of code in memory each time a program accesses it, making it very difficult for attackers to find exploits. These moving-target solutions have since been licensed by cybersecurity and cloud services companies.
"These technologies are quick wins, enabling us to make a lot of immediate impact without having to rebuild the whole system. But they are only a partial solution, a way of securing legacy systems while we are transitioning to safer languages," Okhravi says.
Innovative work is underway to make that transition easier. For example, the TRACTOR program at the U.S. Defense Advanced Research Projects Agency is developing artificial intelligence tools to automatically translate legacy C code to Rust. Lincoln Laboratory researchers will test and evaluate the translator for use in DoD systems.
Okhravi and his coauthors acknowledged in their op-ed that the timeline for full adoption of memory-safe systems is long — likely decades. It will require deploying a combination of new hardware, software, and techniques, each with its own adoption path, costs, and disruptions. Organizations should prioritize mission-critical systems first.
"For example, the most important components in a fighter jet, such as the flight-control algorithm or the munition-handling logic, would be made memory-safe, say, within five years," Okhravi says. Subsystems less important to critical functions would have a longer time frame.
Use of memory-safe programming languages at Lincoln Laboratory
As Lincoln Laboratory continues its leadership in advancing memory-safety technologies, the Secure Resilient Systems and Technology Group has prioritized adopting memory-safe programming languages. "We’ve been investing in the group-wide use of Rust for the past six years as part of our broader strategy to prototype cyber-hardened mission systems and high-assurance cryptographic implementations for the DoD and intelligence community," says Roger Khazan, who leads the group. "Memory safety is fundamental to trustworthiness in these systems."
Rust’s strong guarantees around memory safety, along with its speed and ability to catch bugs early during development, make it especially well-suited for building secure and reliable systems. The laboratory has been using Rust to prototype and transition secure components for embedded, distributed, and cryptographic systems where resilience, performance, and correctness are mission-critical.
These efforts support both immediate U.S. government needs and a longer-term transformation of the national security software ecosystem. "They reflect Lincoln Laboratory’s broader mission of advancing technology in service to national security, grounded in technical excellence, innovation, and trust," Khazan adds.
A technology-agnostic framework
As new computer systems are designed, developers need a framework of memory-safety standards to guide them. Today, attempts to require memory safety in new systems are hampered by the lack of a clear set of definitions and practices.
Okhravi emphasizes that this standardized framework should be technology-agnostic and provide specific timelines with sets of requirements for different types of systems.
"In the acquisition process for the DoD, and even the commercial sector, when we are mandating memory safety, it shouldn’t be tied to a specific technology. It should be generic enough that different types of systems can apply different technologies to get there," he says.
Filling this gap requires not only building industry consensus on technical approaches, but also collaborating with government and academia to bring the effort to fruition.
The need for collaboration was an impetus for the op-ed, and Okhravi says that the consortium of experts will push for standardization from their positions across industry, government, and academia. Contributors to the paper represent a wide range of institutes, from the University of Cambridge and SRI International to Microsoft and Google. Together, they are building momentum to finally root out memory vulnerabilities and the costly damages associated with them.
"We are seeing this cost-risk trade-off mindset shifting, partly because of the maturation of technology and partly because of such consequential incidents,” Okhravi says. "We hear all the time that such-and-such breach cost billions of dollars. Meanwhile, making the system secure might have cost 10 million dollars. Wouldn’t we have been better off making that effort?"
© Image: Tammy Ko
The MIT Press acquires University Science Books from AIP Publishing
The MIT Press announces the acquisition of textbook publisher University Science Books from AIP Publishing, a subsidiary of the American Institute of Physics (AIP).
University Science Books was founded in 1978 to publish intermediate- and advanced-level science and reference books by respected authors, produced to the highest design and production standards and priced as affordably as possible. Over the years, USB’s authors have acquired international followings, and its textbooks in chemistry, physics, and astronomy have been recognized as the gold standard in their respective disciplines. USB was acquired by AIP Publishing in 2021.
Bestsellers include John Taylor’s “Classical Mechanics,” the No. 1 adopted text for undergrad mechanics courses in the United States and Canada, and his “Introduction to Error Analysis;” and Don McQuarrie’s “Physical Chemistry: A Molecular Approach” (commonly known as “Big Red”), the second-most adopted physical chemistry textbook in the U.S.
“We are so pleased to have found a new home for USB’s prestigious list of textbooks in the sciences,” says Alix Vance, CEO of AIP Publishing. “With its strong STEM focus, academic rigor, and high production standards, the MIT Press is the perfect partner to continue the publishing legacy of University Science Books.”
“This acquisition is both a brand and content fit for the MIT Press,” says Amy Brand, director and publisher of the MIT Press. “USB’s respected science list will complement our long-established history of publishing foundational texts in computer science, finance, and economics.”
The MIT Press will take over the USB list as of July 1, with inventory transferring to Penguin Random House Publishing Services, the MIT Press’ sales and distribution partner.
For details regarding University Science Books titles, inventory, and how to order, please contact the MIT Press.
Established in 1962, The MIT Press is one of the largest and most distinguished university presses in the world and a leading publisher of books and journals at the intersection of science, technology, art, social science, and design.
AIP Publishing is a wholly owned not-for-profit subsidiary of AIP and supports the charitable, scientific, and educational purposes of AIP through scholarly publishing activities on its behalf and on behalf of its publishing partners.
© Image courtesy of the MIT Press.
Learning to thrive in diverse African habitats allowed early humans to spread across the world

Today, all non-Africans are known to have descended from a small group of people that ventured into Eurasia around 50,000 years ago. However, fossil evidence shows that there were numerous failed dispersals before this time that left no detectable traces in living people.
In a new study published today in the journal Nature, scientists say that from around 70,000 years ago, early humans began to exploit different habitat types in Africa in ways not seen before.
At this time, our ancestors started to live in the equatorial forests of West and Central Africa, and in the Sahara and Sahel desert regions of North Africa, where they encountered a range of new environmental conditions.
As they adapted to life in these diverse habitats, early humans gained the flexibility to tackle the range of novel environmental conditions they would encounter during their expansion out of Africa.
This increase in the human niche may have been the result of social adaptations, such as long-distance social networks, which allowed for an increase in cultural exchange. The process would have been self-reinforcing: as people started to inhabit a wider proportion of the African continent, regions previously disconnected would have come into contact, leading to further exchanges and possibly even greater flexibility. The final outcome was that our species became the ultimate generalist, able to tackle a wider range of environments.
Andrea Manica, Professor of Evolutionary Ecology in the University of Cambridge’s Department of Zoology, who co-led the study with Professor Eleanor Scerri from the Max Planck Institute of Geoanthropology in Germany, said: “Around 70,000-50,000 years ago, the easiest route out of Africa would have been more challenging than during previous periods, and yet this expansion was big, and ultimately successful.”
Manica added: “It’s incredibly exciting that we were able to look back in time and pinpoint the changes that enabled our ancestors to successfully migrate out of Africa.”
Dr Emily Hallett of Loyola University Chicago, co-lead author of the study, said: “We assembled a dataset of archaeological sites and environmental information covering the last 120,000 years in Africa. We used methods developed in ecology to understand changes in human environmental niches (the habitats humans can use and thrive in) during this time.”
Dr Michela Leonardi at the University of Cambridge and London’s Natural History Museum, the study’s other lead author, said: “Our results showed that the human niche began to expand significantly from 70,000 years ago, and that this expansion was driven by humans increasing their use of diverse habitat types, from forests to arid deserts.”
Many explanations for the uniquely successful dispersal out of Africa have previously been proposed, from technological innovations to immunities granted by interbreeding with Eurasian hominins. But there is no evidence of technological innovation at this time, and earlier interbreeding does not appear to have helped the long-term success of previous attempts to spread out of Africa.
“Unlike previous humans dispersing out of Africa, those human groups moving into Eurasia after around 60-50,000 years ago were equipped with a distinctive ecological flexibility as a result of coping with climatically challenging habitats,” said Scerri. “This likely provided a key mechanism for the adaptive success of our species beyond their African homeland.”
Previous human dispersals out of Africa - which were not successful in the long term - seem to have happened during particularly favourable windows of increased rainfall in the Saharo-Arabian desert belt, which created ‘green corridors’ for people to move into Eurasia.
The environmental flexibility developed in Africa from around 70,000 years ago ultimately resulted in modern humans’ unique ability to adapt and thrive in diverse environments, and to cope with varying environmental conditions throughout life.
This research was supported by funding from the Max Planck Society, European Research Council and Leverhulme Trust.
Adapted from a press release by the Max Planck Institute of Geoanthropology, Germany
Reference: Hallett, E. Y. et al: ‘Major expansion in the human niche preceded out of Africa dispersal.’ Nature, June 2025. DOI: 10.1038/s41586-025-09154-0.
Before the ‘Out of Africa’ migration that led our ancestors into Eurasia and beyond, human populations learned to adapt to new and challenging habitats including African forests and deserts, which was key to the long-term success of our species’ dispersal.
Researchers present bold ideas for AI at MIT Generative AI Impact Consortium kickoff event
Launched in February of this year, the MIT Generative AI Impact Consortium (MGAIC), a presidential initiative led by MIT’s Office of Innovation and Strategy and administered by the MIT Stephen A. Schwarzman College of Computing, issued a call for proposals, inviting researchers from across MIT to submit ideas for innovative projects studying high-impact uses of generative AI models.
The call received 180 submissions from nearly 250 faculty members, spanning all of MIT’s five schools and the college. The overwhelming response across the Institute exemplifies the growing interest in AI and follows in the wake of MIT’s Generative AI Week and call for impact papers. Fifty-five proposals were selected for MGAIC’s inaugural seed grants, with several more selected to be funded by the consortium’s founding company members.
Over 30 funding recipients presented their proposals to the greater MIT community at a kickoff event on May 13. Anantha P. Chandrakasan, MIT’s chief innovation and strategy officer, dean of the School of Engineering, and head of the consortium, welcomed the attendees and thanked the consortium’s founding industry members.
“The amazing response to our call for proposals is an incredible testament to the energy and creativity that MGAIC has sparked at MIT. We are especially grateful to our founding members, whose support and vision helped bring this endeavor to life,” adds Chandrakasan. “One of the things that has been most remarkable about MGAIC is that this is a truly cross-Institute initiative. Deans from all five schools and the college collaborated in shaping and implementing it.”
Vivek F. Farias, the Patrick J. McGovern (1959) Professor at the MIT Sloan School of Management and co-faculty director of the consortium with Tim Kraska, associate professor of electrical engineering and computer science in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), emceed the afternoon of five-minute lightning presentations.
Presentation highlights include:
“AI-Driven Tutors and Open Datasets for Early Literacy Education,” presented by Ola Ozernov-Palchik, a research scientist at the McGovern Institute for Brain Research, proposed a refinement of AI tutors for pK-7 students that could help decrease literacy disparities.
“Developing jam_bots: Real-Time Collaborative Agents for Live Human-AI Musical Improvisation,” presented by Anna Huang, assistant professor of music and assistant professor of electrical engineering and computer science, and Joe Paradiso, the Alexander W. Dreyfoos (1954) Professor in Media Arts and Sciences at the MIT Media Lab, aims to enhance human-AI musical collaboration in real-time for live concert improvisation.
“GENIUS: GENerative Intelligence for Urban Sustainability,” presented by Norhan Bayomi, a postdoc at the MIT Environmental Solutions Initiative and a research assistant in the Urban Metabolism Group, aims to address the lack of a standardized approach for evaluating and benchmarking cities’ climate policies.
Georgia Perakis, the John C Head III Dean (Interim) of the MIT Sloan School of Management and professor of operations management, operations research, and statistics, who serves as co-chair of the GenAI Dean’s oversight group with Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, ended the event with closing remarks that emphasized “the readiness and eagerness of our community to lead in this space.”
“This is only the beginning,” she continued. “We are at the front edge of a historic moment — one where MIT has the opportunity, and the responsibility, to shape the future of generative AI with purpose, with excellence, and with care.”
© Photo: Jiin Kang
Island rivers carve passageways through coral reefs
Volcanic islands, such as the islands of Hawaii and the Caribbean, are surrounded by coral reefs that encircle each island in a labyrinthine, living ring. A coral reef is punctured at points by reef passes — wide channels that cut through the coral and serve as conduits for ocean water and nutrients to filter in and out. These watery passageways provide circulation throughout a reef, helping to maintain the health of corals by flushing out freshwater and transporting key nutrients.
Now, MIT scientists have found that reef passes are shaped by island rivers. In a study appearing today in the journal Geophysical Research Letters, the team shows that the locations of reef passes along coral reefs line up with where rivers funnel out from an island’s coast.
Their findings provide the first quantitative evidence of rivers forming reef passes. Scientists and explorers had speculated that this might be the case: where a river on a volcanic island meets the coast, the freshwater and sediment it carries flow toward the reef, and a strong enough flow can tunnel into the surrounding coral. The idea had been proposed from time to time but never quantitatively tested, until now.
“The results of this study help us to understand how the health of coral reefs depends on the islands they surround,” says study author Taylor Perron, the Cecil and Ida Green Professor of Earth, Atmospheric and Planetary Sciences at MIT.
“A lot of discussion around rivers and their impact on reefs today has been negative because of human impact and the effects of agricultural practices,” adds lead author Megan Gillen, a graduate student in the MIT-WHOI Joint Program in Oceanography. “This study shows the potential long-term benefits rivers can have on reefs, which I hope reshapes the paradigm and highlights the natural state of rivers interacting with reefs.”
The study’s other co-author is Andrew Ashton of the Woods Hole Oceanographic Institution.
Drawing the lines
The new study is based on the team’s analysis of the Society Islands, a chain of islands in the South Pacific Ocean that includes Tahiti and Bora Bora. Gillen, who joined the MIT-WHOI program in 2020, was interested in exploring connections between coral reefs and the islands they surround. With limited options for on-site work during the Covid-19 pandemic, she and Perron looked to see what they could learn through satellite images and maps of island topography. They did a quick search using Google Earth and zeroed in on the Society Islands for their uniquely visible reef and island features.
“The islands in this chain have these iconic, beautiful reefs, and we kept noticing these reef passes that seemed to align with deeply embayed portions of the coastline,” Gillen says. “We started asking ourselves, is there a correlation here?”
Viewed from above, the coral reefs that circle some islands bear what look to be notches, like cracks that run straight through a ring. These breaks in the coral are reef passes — large channels that run tens of meters deep and can be wide enough for some boats to pass through. On first look, Gillen noticed that the most obvious reef passes seemed to line up with flooded river valleys — depressions in the coastline that have been eroded over time by island rivers that flow toward the ocean. She wondered whether and to what extent island rivers might shape reef passes.
“People have examined the flow through reef passes to understand how ocean waves and seawater circulate in and out of lagoons, but there have been no claims of how these passes are formed,” Gillen says. “Reef pass formation has been mentioned infrequently in the literature, and people haven’t explored it in depth.”
Reefs unraveled
To get a detailed view of the topography in and around the Society Islands, the team used data from the NASA Shuttle Radar Topography Mission — two radar antennae that flew aboard the space shuttle in 1999 and measured the topography across 80 percent of the Earth’s surface.
The researchers used the mission’s topographic data in the Society Islands to create a map of every drainage basin along the coast of each island, to get an idea of where major rivers flow or once flowed. They also marked the locations of every reef pass in the surrounding coral reefs. They then essentially “unraveled” each island’s coastline and reef into a straight line, and compared the locations of basins versus reef passes.
“Looking at the unwrapped shorelines, we find a significant correlation in the spatial relationship between these big river basins and where the passes line up,” Gillen says. “So we can say that statistically, the alignment of reef passes and large rivers does not seem random. The big rivers have a role in forming passes.”
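A rough sketch of that comparison, with made-up coordinates and a deliberately simplified measure rather than the study's actual statistics, is to place river-basin outlets and reef passes along the same unwrapped coastline axis and ask how close each pass sits to its nearest outlet:

```rust
// Hypothetical sketch of the alignment comparison: positions (in km)
// along an unwrapped coastline. The numbers are invented; the real
// analysis tests whether the observed alignment could arise by chance.
fn nearest_distance(target: f64, candidates: &[f64]) -> f64 {
    candidates
        .iter()
        .map(|c| (target - c).abs())
        .fold(f64::INFINITY, f64::min)
}

fn main() {
    let basin_outlets = [2.0, 7.5, 13.0, 21.4]; // river drainage outlets
    let reef_passes = [2.3, 7.1, 20.8]; // breaks in the surrounding reef

    for pass in reef_passes {
        let d = nearest_distance(pass, &basin_outlets);
        println!("pass at {pass:.1} km: nearest outlet {d:.1} km away");
    }
}
```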
As for how rivers shape the coral conduits, the team has two ideas, which they call, respectively, reef incision and reef encroachment. In reef incision, they propose that reef passes can form in times when the sea level is relatively low, such that the reef is exposed above the sea surface and a river can flow directly over the reef. The water and sediment carried by the river can then erode the coral, progressively carving a path through the reef.
When sea level is relatively higher, the team suspects a reef pass can still form, through reef encroachment. Coral reefs naturally live close to the water surface, where there is light and opportunity for photosynthesis. When sea levels rise, corals naturally grow upward and inward toward an island, to try to “catch up” to the water line.
“Reefs migrate toward the islands as sea levels rise, trying to keep pace with changing average sea level,” Gillen says.
However, part of the encroaching reef can end up in old river channels that were previously carved out by large rivers and that are lower than the rest of the island coastline. The corals in these river beds end up deeper than light can extend into the water column, and inevitably drown, leaving a gap in the form of a reef pass.
“We don’t think it’s an either/or situation,” Gillen says. “Reef incision occurs when sea levels fall, and reef encroachment happens when sea levels rise. Both mechanisms, occurring over dozens of cycles of sea-level rise and island evolution, are likely responsible for the formation and maintenance of reef passes over time.”
The team also looked for differences in reef passes around older versus younger islands. They observed that younger islands were surrounded by more reef passes, spaced closer together, while older islands had fewer, more widely spaced passes.
As islands age, they subside, or sink, into the ocean, which reduces the amount of land that funnels rainwater into rivers. Eventually, rivers are too weak to keep the reef passes open, at which point, the ocean likely takes over, and incoming waves could act to close up some passes.
Gillen is exploring ideas for how rivers, or river-like flow, can be engineered to create paths through coral reefs in ways that would promote circulation and benefit reef health.
“Part of me wonders: If you had a more persistent flow, in places where you don’t naturally have rivers interacting with the reef, could that potentially be a way to increase health, by incorporating that river component back into the reef system?” Gillen says. “That’s something we’re thinking about.”
This research was supported, in part, by the WHOI Watson and Von Damm fellowships.
© Credit: Remi Conte, Tetiaroa Society
MIT engineers uncover a surprising reason why tissues are flexible or rigid
Water makes up around 60 percent of the human body. More than half of this water sloshes around inside the cells that make up organs and tissues. Much of the remaining water flows in the nooks and crannies between cells, much like seawater between grains of sand.
Now, MIT engineers have found that this “intercellular” fluid plays a major role in how tissues respond when squeezed, pressed, or physically deformed. Their findings could help scientists understand how cells, tissues, and organs physically adapt to conditions such as aging, cancer, diabetes, and certain neuromuscular diseases.
In a paper appearing today in Nature Physics, the researchers show that when a tissue is pressed or squeezed, it is more compliant and relaxes more quickly when the fluid between its cells flows easily. When the cells are packed together and there is less room for intercellular flow, the tissue as a whole is stiffer and resists being pressed or squeezed.
The findings challenge conventional wisdom, which has assumed that a tissue’s compliance depends mainly on what’s inside, rather than around, a cell. Now that the researchers have shown that intercellular flow determines how tissues will adapt to physical forces, the results can be applied to understand a wide range of physiological conditions, including how muscles withstand exercise and recover from injury, and how a tissue’s physical adaptability may affect the progression of aging, cancer, and other medical conditions.
The team envisions the results could also inform the design of artificial tissues and organs. For instance, in engineering artificial tissue, scientists might optimize intercellular flow within the tissue to improve its function or resilience. The researchers suspect that intercellular flow could also be a route for delivering nutrients or therapies, either to heal a tissue or eradicate a tumor.
“People know there is a lot of fluid between cells in tissues, but how important that is, in particular in tissue deformation, is completely ignored,” says Ming Guo, associate professor of mechanical engineering at MIT. “Now we really show we can observe this flow. And as the tissue deforms, flow between cells dominates the behavior. So, let’s pay attention to this when we study diseases and engineer tissues.”
Guo is a co-author of the new study, which includes lead author and MIT postdoc Fan Liu PhD ’24, along with Bo Gao and Hui Li of Beijing Normal University, and Liran Lei and Shuainan Liu of Peking Union Medical College.
Pressed and squeezed
The tissues and organs in our body are constantly undergoing physical deformations, from the large stretch and strain of muscles during motion to the small and steady contractions of the heart. In some cases, how easily tissues adapt to deformation can relate to how quickly a person recovers from, for instance, an allergic reaction, a sports injury, or a stroke. However, exactly what sets a tissue’s response to deformation is largely unknown.
Guo and his group at MIT looked into the mechanics of tissue deformation, and the role of intercellular flow in particular, following a study they published in 2020. In that study, they focused on tumors and observed the way in which fluid can flow from the center of a tumor out to its edges, through the cracks and crevices between individual tumor cells. They found that when a tumor was squeezed or pressed, the intercellular flow increased, acting as a conveyor belt to transport fluid from the center to the edges. Intercellular flow, they found, could fuel tumor invasion into surrounding regions.
In their new study, the team looked to see what role this intercellular flow might play in other, noncancerous tissues.
“Whether you allow the fluid to flow between cells or not seems to have a major impact,” Guo says. “So we decided to look beyond tumors to see how this flow influences how other tissues respond to deformation.”
A fluid pancake
Guo, Liu, and their colleagues studied the intercellular flow in a variety of biological tissues, including cells derived from pancreatic tissue. They carried out experiments in which they first cultured small clusters of tissue, each measuring less than a quarter of a millimeter wide and numbering tens of thousands of individual cells. They placed each tissue cluster in a custom-designed testing platform that the team built specifically for the study.
“These microtissue samples are in this sweet zone where they are too large to see with atomic force microscopy techniques and too small for bulkier devices,” Guo says. “So, we decided to build a device.”
The researchers adapted a high-precision microbalance that measures minute changes in weight. They combined this with a step motor that is designed to press down on a sample with nanometer precision. The team placed tissue clusters one at a time on the balance and recorded each cluster’s changing weight as it relaxed from a sphere into the shape of a pancake in response to the compression. The team also took videos of the clusters as they were squeezed.
For each type of tissue, the team made clusters of varying sizes. They reasoned that if the tissue’s response is ruled by the flow between cells, then the bigger a tissue, the longer it should take for water to seep through, and therefore the longer it should take the tissue to relax. If the response is instead determined by the structure of the tissue rather than the fluid, relaxation should take the same amount of time regardless of size.
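A back-of-envelope version of that reasoning, assuming the fluid seeps diffusively (poroelastically) through the cluster, makes the prediction concrete: the relaxation time should scale as

    τ ≈ L² / D

where L is the cluster size and D is an effective diffusivity set by how easily fluid moves between cells. Doubling the cluster size should then roughly quadruple the relaxation time, while a response governed by intracellular structure alone would show no size dependence.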
Over multiple experiments with a variety of tissue types and sizes, the team observed a similar trend: The bigger the cluster, the longer it took to relax, indicating that intercellular flow dominates a tissue’s response to deformation.
“We show that this intercellular flow is a crucial component to be considered in the fundamental understanding of tissue mechanics and also applications in engineering living systems,” Liu says.
Going forward, the team plans to look into how intercellular flow influences brain function, particularly in disorders such as Alzheimer’s disease.
“Intercellular or interstitial flow can help you remove waste and deliver nutrients to the brain,” Liu adds. “Enhancing this flow in some cases might be a good thing.”
“As this work shows, as we apply pressure to a tissue, fluid will flow,” Guo says. “In the future, we can think of designing ways to massage a tissue to allow fluid to transport nutrients between cells.”
© Image: Courtesy of the researchers
“Cold spray” 3D printing technique proves effective for on-site bridge repair
More than half of the nation’s 623,218 bridges are experiencing significant deterioration. Through an in-field case study conducted in western Massachusetts, a team led by the University of Massachusetts at Amherst in collaboration with researchers from the MIT Department of Mechanical Engineering (MechE) has just successfully demonstrated that 3D printing may provide a cost-effective, minimally disruptive solution.
“Anytime you drive, you go under or over a corroded bridge,” says Simos Gerasimidis, associate professor of civil and environmental engineering at UMass Amherst and former visiting professor in the Department of Civil and Environmental Engineering at MIT, in a press release. “They are everywhere. It’s impossible to avoid, and their condition often shows significant deterioration. We know the numbers.”
The numbers, according to the American Society of Civil Engineers’ 2025 Report Card for America’s Infrastructure, are staggering: Across the United States, 49.1 percent of the nation’s 623,218 bridges are in “fair” condition and 6.8 percent are in “poor” condition, a combined 55.9 percent. The projected cost to restore all of these failing bridges exceeds $191 billion.
A proof-of-concept repair took place last month on a small, corroded section of a bridge in Great Barrington, Massachusetts. The technique, called cold spray, can extend the life of beams, reinforcing them with newly deposited steel. The process accelerates particles of powdered steel in heated, compressed gas, and then a technician uses an applicator to spray the steel onto the beam. Repeated sprays create multiple layers, restoring thickness and other structural properties.
This method has proven to be an effective solution for other large structures like submarines, airplanes, and ships, but bridges present a problem on a greater scale. Unlike movable vessels, stationary bridges cannot be brought to the 3D printer — the printer must be brought on-site — and, to lessen systemic impacts, repairs must also be made with minimal disruptions to traffic, which the new approach allows.
“Now that we’ve completed this proof-of-concept repair, we see a clear path to a solution that is much faster, less costly, easier, and less invasive,” says Gerasimidis. “To our knowledge, this is a first. Of course, there is some R&D that needs to be developed, but this is a huge milestone to that.”
“This is a tremendous collaboration where cutting-edge technology is brought to address a critical need for infrastructure in the commonwealth and across the United States,” says John Hart, Class of 1922 Professor and head of MechE at MIT. Hart and Haden Quinlan, senior program manager in the Center for Advanced Production Technologies at MIT, are leading MIT’s efforts in the project. Hart is also faculty co-lead of the recently announced MIT Initiative for New Manufacturing.
“Integrating digital systems with advanced physical processing is the future of infrastructure,” says Quinlan. “We’re excited to have moved this technology beyond the lab and into the field, and grateful to our collaborators in making this work possible.”
UMass says the Massachusetts Department of Transportation (MassDOT) has been a valued research partner, helping to identify the problem and providing essential support for the development and demonstration of the technology. Technical guidance and funding support were provided by the MassDOT Highway Division and the Research and Technology Transfer Program.
Equipment for this project was supported through the Massachusetts Manufacturing Innovation Initiative, a statewide program led by the Massachusetts Technology Collaborative (MassTech)’s Center for Advanced Manufacturing that helps bridge the gap between innovation and commercialization in hard tech manufacturing.
“It’s a very Massachusetts success story,” Gerasimidis says. “It involves MassDOT being open-minded to new ideas. It involves UMass and MIT putting [together] the brains to do it. It involves MassTech to bring manufacturing back to Massachusetts. So, I think it’s a win-win for everyone involved here.”
The bridge in Great Barrington is scheduled for demolition in a few years. After demolition, the recently sprayed beams will be taken back to UMass for testing and measurement, to study how well the deposited steel adhered to the structure in the field compared with a controlled lab setting, whether it corroded further after it was sprayed, and what its mechanical properties are.
This demonstration builds on several years of research by the UMass and MIT teams, including development of a “digital thread” approach to scan corroded beam surfaces and determine material deposition profiles, alongside laboratory studies of cold spray and other additive manufacturing approaches that are suited to field deployment.
Altogether, this work is a collaborative effort among UMass Amherst, MIT MechE, MassDOT, the Massachusetts Technology Collaborative (MassTech), the U.S. Department of Transportation, and the Federal Highway Administration. Research reports are available on the MassDOT website.
© Photo: Alexia Cota/UMass Amherst
When Earth iced over, early life may have sheltered in meltwater ponds
When the Earth froze over, where did life shelter? MIT scientists say one refuge may have been pools of melted ice that dotted the planet’s icy surface.
In a study appearing today in Nature Communications, the researchers report that 635 million to 720 million years ago, during periods known as “Snowball Earth,” when much of the planet was covered in ice, some of our ancient cellular ancestors could have waited things out in meltwater ponds.
The scientists found that eukaryotes — complex cellular lifeforms that eventually evolved into the diverse multicellular life we see today — could have survived the global freeze by living in shallow pools of water. These small, watery oases may have persisted atop relatively shallow ice sheets present in equatorial regions. There, the ice surface could accumulate dark-colored dust and debris from below, which enhanced its ability to melt into pools. At temperatures hovering around 0 degrees Celsius, the resulting meltwater ponds could have served as habitable environments for certain forms of early complex life.
The team drew its conclusions based on an analysis of modern-day meltwater ponds. Today in Antarctica, small pools of melted ice can be found along the margins of ice sheets. The conditions along these polar ice sheets are similar to what likely existed along ice sheets near the equator during Snowball Earth.
The researchers analyzed samples from a variety of meltwater ponds located on the McMurdo Ice Shelf in an area that was first described by members of Robert Falcon Scott's 1903 expedition as “dirty ice.” The MIT researchers discovered clear signatures of eukaryotic life in every pond. The communities of eukaryotes varied from pond to pond, revealing a surprising diversity of life across the setting. The team also found that salinity plays a key role in the kind of life a pond can host: Ponds that were more brackish or salty had more similar eukaryotic communities, which differed from those in ponds with fresher waters.
“We’ve shown that meltwater ponds are valid candidates for where early eukaryotes could have sheltered during these planet-wide glaciation events,” says lead author Fatima Husain, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “This shows us that diversity is present and possible in these sorts of settings. It’s really a story of life’s resilience.”
The study’s MIT co-authors include Schlumberger Professor of Geobiology Roger Summons and former postdoc Thomas Evans, along with Jasmin Millar of Cardiff University, Anne Jungblut at the Natural History Museum in London, and Ian Hawes of the University of Waikato in New Zealand.
Polar plunge
“Snowball Earth” is the colloquial term for periods in Earth’s history when the planet iced over. It most often refers to the two consecutive, multi-million-year glaciation events of the Cryogenian Period, the interval geologists place between 635 and 720 million years ago. Whether the Earth was more of a hardened snowball or a softer “slushball” is still up for debate. But scientists are certain of one thing: Most of the planet was plunged into a deep freeze, with average global temperatures of minus 50 degrees Celsius. The question has been: How and where did life survive?
“We’re interested in understanding the foundations of complex life on Earth. We see evidence for eukaryotes before and after the Cryogenian in the fossil record, but we largely lack direct evidence of where they may have lived during,” Husain says. “The great part of this mystery is, we know life survived. We’re just trying to understand how and where.”
There are a number of ideas for where organisms could have sheltered during Snowball Earth, including in certain patches of the open ocean (if such environments existed), in and around deep-sea hydrothermal vents, and under ice sheets. In considering meltwater ponds, Husain and her colleagues pursued the hypothesis that surface ice meltwaters may also have been capable of supporting early eukaryotic life at the time.
“There are many hypotheses for where life could have survived and sheltered during the Cryogenian, but we don’t have excellent analogs for all of them,” Husain notes. “Above-ice meltwater ponds occur on Earth today and are accessible, giving us the opportunity to really focus in on the eukaryotes which live in these environments.”
Small pond, big life
For their new study, the researchers analyzed samples taken from meltwater ponds in Antarctica. In 2018, Summons and colleagues from New Zealand traveled to a region of the McMurdo Ice Shelf in East Antarctica known to host small ponds of melted ice, each just a few feet deep and a few meters wide. There, water freezes all the way to the seafloor, trapping dark-colored sediments and marine organisms in the process. Wind-driven loss of ice from the surface acts as a sort of conveyor belt, bringing this trapped debris to the surface over time, where it absorbs the sun’s warmth and melts the ice around it; the surrounding debris-free ice reflects incoming sunlight, and the result is the formation of shallow meltwater ponds.
The bottom of each pond is lined with mats of microbes that have built up over years to form layers of sticky cellular communities.
“These mats can be a few centimeters thick, colorful, and they can be very clearly layered,” Husain says.
These microbial mats are made up of cyanobacteria: prokaryotic, single-celled photosynthetic organisms that lack a cell nucleus or other organelles. While these ancient microbes are known to survive within some of the harshest environments on Earth, including meltwater ponds, the researchers wanted to know whether eukaryotes — complex organisms that evolved a cell nucleus and other membrane-bound organelles — could also weather similarly challenging circumstances. Answering this question would take more than a microscope, as the defining characteristics of the microscopic eukaryotes present among the microbial mats are too subtle to distinguish by eye.
To characterize the eukaryotes, the team analyzed the mats for specific lipids they make, called sterols, as well as genetic components called ribosomal ribonucleic acid (rRNA), both of which can be used to identify organisms with varying degrees of specificity. These two independent sets of analyses provided complementary fingerprints for certain eukaryotic groups. Together, the analyses turned up many sterols and rRNA genes closely associated with specific types of algae, protists, and microscopic animals among the microbial mats. The researchers were able to assess the types and relative abundance of lipids and rRNA genes from pond to pond, and found the ponds hosted a surprising diversity of eukaryotic life.
“No two ponds were alike,” Husain says. “There are repeating casts of characters, but they’re present in different abundances. And we found diverse assemblages of eukaryotes from all the major groups in all the ponds studied. These eukaryotes are the descendants of the eukaryotes that survived the Snowball Earth. This really highlights that meltwater ponds during Snowball Earth could have served as above-ice oases that nurtured the eukaryotic life that enabled the diversification and proliferation of complex life — including us — later on.”
This research was supported, in part, by the NASA Exobiology Program, the Simons Collaboration on the Origins of Life, and a MISTI grant from MIT-New Zealand.
© Credit: Roger Summons
Supercharged vaccine could offer strong protection with just one dose
Researchers at MIT and the Scripps Research Institute have shown that they can generate a strong immune response to HIV with just one vaccine dose, by adding two powerful adjuvants — materials that help stimulate the immune system.
In a study of mice, the researchers showed that this approach produced a much wider diversity of antibodies against an HIV antigen, compared to the vaccine given on its own or with just one of the adjuvants. The dual-adjuvant vaccine accumulated in the lymph nodes and remained there for up to a month, allowing the immune system to build up a much greater number of antibodies against the HIV protein.
This strategy could lead to the development of vaccines that only need to be given once, for infectious diseases including HIV or SARS-CoV-2, the researchers say.
“This approach is compatible with many protein-based vaccines, so it offers the opportunity to engineer new formulations for these types of vaccines across a wide range of different diseases, such as influenza, SARS-CoV-2, or other pandemic outbreaks,” says J. Christopher Love, the Raymond A. and Helen E. St. Laurent Professor of Chemical Engineering at MIT, and a member of the Koch Institute for Integrative Cancer Research and the Ragon Institute of MGH, MIT, and Harvard.
Love and Darrell Irvine, a professor of immunology and microbiology at the Scripps Research Institute, are the senior authors of the study, which appears today in Science Translational Medicine. Kristen Rodrigues PhD ’23 and Yiming Zhang PhD ’25 are the lead authors of the paper.
More powerful vaccines
Most vaccines are delivered along with adjuvants, which help to stimulate a stronger immune response to the antigen. One adjuvant commonly used with protein-based vaccines, including those for hepatitis A and B, is aluminum hydroxide, also known as alum. This adjuvant works by activating the innate immune response, helping the body to form a stronger memory of the vaccine antigen.
Several years ago, Irvine developed another adjuvant based on saponin, an FDA-approved adjuvant derived from the bark of the Chilean soapbark tree. His work showed that nanoparticles containing both saponin and a molecule called MPLA, which promotes inflammation, worked better than saponin on its own. That nanoparticle, known as SMNP, is now being used as an adjuvant for an HIV vaccine that is currently in clinical trials.
Irvine and Love then tried combining alum and SMNP and showed that vaccines containing both of those adjuvants could generate even more powerful immune responses against either HIV or SARS-CoV-2.
In the new paper, the researchers wanted to explore why these two adjuvants work so well together to boost the immune response, specifically the B cell response. B cells produce antibodies that can circulate in the bloodstream and recognize a pathogen if the body is exposed to it again.
For this study, the researchers used an HIV protein called MD39 as their vaccine antigen, and anchored dozens of these proteins to each alum particle, along with SMNP.
After vaccinating mice with these particles, the researchers found that the vaccine accumulated in the lymph nodes — structures where B cells encounter antigens and undergo rapid mutations that generate antibodies with high affinity for a particular antigen. This process takes place within clusters of cells known as germinal centers.
The researchers showed that SMNP and alum helped the HIV antigen to penetrate through the protective layer of cells surrounding the lymph nodes without being broken down into fragments. The adjuvants also helped the antigens to remain intact in the lymph nodes for up to 28 days.
“As a result, the B cells that are cycling in the lymph nodes are constantly being exposed to the antigen over that time period, and they get the chance to refine their solution to the antigen,” Love says.
This approach may mimic what occurs during a natural infection, when antigens can remain in the lymph nodes for weeks, giving the body time to build up an immune response.
Antibody diversity
Single-cell RNA sequencing of B cells from the vaccinated mice revealed that the vaccine containing both adjuvants generated a much more diverse repertoire of B cells and antibodies. Mice that received the dual-adjuvant vaccine produced two to three times more unique B cells than mice that received just one of the adjuvants.
That increase in B cell number and diversity boosts the chances that the vaccine could generate broadly neutralizing antibodies — antibodies that can recognize a variety of strains of a given virus, such as HIV.
“When you think about the immune system sampling all of the possible solutions, the more chances we give it to identify an effective solution, the better,” Love says. “Generating broadly neutralizing antibodies is something that likely requires both the kind of approach that we showed here, to get that strong and diversified response, as well as antigen design to get the right part of the immunogen shown.”
Using these two adjuvants together could also contribute to the development of more potent vaccines against other infectious diseases, with just a single dose.
“What’s potentially powerful about this approach is that you can achieve long-term exposures based on a combination of adjuvants that are already reasonably well-understood, so it doesn’t require a different technology. It’s just combining features of these adjuvants to enable low-dose or potentially even single-dose treatments,” Love says.
The research was funded by the National Institutes of Health; the Koch Institute Support (core) Grant from the National Cancer Institute; the Ragon Institute of MGH, MIT, and Harvard; and the Howard Hughes Medical Institute.
© Image: Courtesy of the researchers
New 3D chips could make electronics faster and more energy-efficient
The advanced semiconductor material gallium nitride will likely be key for the next generation of high-speed communication systems and the power electronics needed for state-of-the-art data centers.
Unfortunately, the high cost of gallium nitride (GaN) and the specialization required to incorporate this semiconductor material into conventional electronics have limited its use in commercial applications.
Now, researchers from MIT and elsewhere have developed a new fabrication process that integrates high-performance GaN transistors onto standard silicon CMOS chips in a way that is low-cost, scalable, and compatible with existing semiconductor foundries.
Their method involves building many tiny transistors on the surface of a GaN chip, cutting out each individual transistor, and then bonding just the necessary number of transistors onto a silicon chip using a low-temperature process that preserves the functionality of both materials.
The cost remains minimal since only a tiny amount of GaN material is added to the chip, but the resulting device can receive a significant performance boost from compact, high-speed transistors. In addition, by separating the GaN circuit into discrete transistors that can be spread over the silicon chip, the new technology is able to reduce the temperature of the overall system.
The researchers used this process to fabricate a power amplifier, an essential component in mobile phones, that achieves higher signal strength and efficiencies than devices with silicon transistors. In a smartphone, this could improve call quality, boost wireless bandwidth, enhance connectivity, and extend battery life.
Because their method fits into standard procedures, it could improve electronics that exist today as well as future technologies. Down the road, the new integration scheme could even enable quantum applications, as GaN performs better than silicon at the cryogenic temperatures essential for many types of quantum computing.
“If we can bring the cost down, improve the scalability, and, at the same time, enhance the performance of the electronic device, it is a no-brainer that we should adopt this technology. We’ve combined the best of what exists in silicon with the best possible gallium nitride electronics. These hybrid chips can revolutionize many commercial markets,” says Pradyot Yadav, an MIT graduate student and lead author of a paper on this method.
He is joined on the paper by fellow MIT graduate students Jinchen Wang and Patrick Darmawi-Iskandar; MIT postdoc John Niroula; senior authors Ulrich L. Rohde, a visiting scientist at the Microsystems Technology Laboratories (MTL), and Ruonan Han, an associate professor in the Department of Electrical Engineering and Computer Science (EECS) and member of MTL; and Tomás Palacios, the Clarence J. LeBel Professor of EECS, and director of MTL; as well as collaborators at Georgia Tech and the Air Force Research Laboratory. The research was recently presented at the IEEE Radio Frequency Integrated Circuits Symposium.
Swapping transistors
Gallium nitride is the second most widely used semiconductor in the world, just after silicon, and its unique properties make it ideal for applications such as lighting, radar systems and power electronics.
The material has been around for decades, and to get the most out of its performance, chips made of GaN need to be connected to digital chips made of silicon, also called CMOS chips. To enable this, some integration methods bond GaN transistors onto a CMOS chip by soldering the connections, but this limits how small the GaN transistors can be. The tinier the transistors, the higher the frequency at which they can work.
Other methods integrate an entire gallium nitride wafer on top of a silicon wafer, but using so much material is extremely costly, especially since the GaN is only needed in a few tiny transistors. The rest of the material in the GaN wafer is wasted.
“We wanted to combine the functionality of GaN with the power of digital chips made of silicon, but without having to compromise on either cost or bandwidth. We achieved that by adding super-tiny discrete gallium nitride transistors right on top of the silicon chip,” Yadav explains.
The new chips are the result of a multistep process.
First, a tightly packed collection of minuscule transistors is fabricated across the entire surface of a GaN wafer. Using very fine laser technology, the researchers cut each one down to just the size of the transistor, 240 by 410 microns, forming what they call a dielet. (A micron is one millionth of a meter.)
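Some quick arithmetic underscores how little GaN that dielet size implies. The Python sketch below uses only the 240-by-410-micron figure quoted above; the 100-square-millimeter comparison chip is an assumed example, not a number from the paper.

```python
# Area of one GaN dielet, from the dimensions quoted above.
dielet_w_um, dielet_h_um = 240, 410
dielet_area_mm2 = (dielet_w_um / 1000) * (dielet_h_um / 1000)
print(f"one GaN dielet: {dielet_area_mm2:.4f} mm^2")  # ~0.0984 mm^2

# Compare with blanketing a hypothetical 100 mm^2 silicon chip in GaN,
# which is effectively what whole-wafer integration does.
chip_area_mm2 = 100.0
print(f"fraction of chip area per dielet: "
      f"{dielet_area_mm2 / chip_area_mm2:.2%}")  # ~0.10%
```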
Each transistor is fabricated with tiny copper pillars on top, which the researchers use to bond directly to the copper pillars on the surface of a standard silicon CMOS chip. Copper-to-copper bonding can be done at temperatures below 400 degrees Celsius, which is low enough to avoid damaging either material.
Current GaN integration techniques require bonds that utilize gold, an expensive material that needs much higher temperatures and stronger bonding forces than copper. Since gold can contaminate the tools used in most semiconductor foundries, it typically requires specialized facilities.
“We wanted a process that was low-cost, low-temperature, and low-force, and copper wins on all of those compared to gold. At the same time, it has better conductivity,” Yadav says.
A new tool
To enable the integration process, they created a specialized new tool that can carefully place the extremely tiny GaN transistors onto a silicon chip. The tool uses a vacuum to hold the dielet as it moves on top of the silicon chip, zeroing in on the copper bonding interface with nanometer precision.
They use advanced microscopy to monitor the interface, and when the dielet is in the right position, they apply heat and pressure to bond the GaN transistor to the chip.
“For each step in the process, I had to find a new collaborator who knew how to do the technique that I needed, learn from them, and then integrate that into my platform. It was two years of constant learning,” Yadav says.
Once the researchers had perfected the fabrication process, they demonstrated it by developing power amplifiers, which are radio frequency circuits that boost wireless signals.
Their devices achieved higher bandwidth and better gain than devices made with traditional silicon transistors. Each compact chip has an area of less than half a square millimeter.
In addition, because the silicon chip used in their demonstration is based on Intel 16, a state-of-the-art 22nm FinFET process with advanced metallization and passive options, they were able to incorporate components often used in silicon circuits, such as neutralization capacitors. This significantly improved the gain of the amplifier, bringing it one step closer to enabling the next generation of wireless technologies.
“To address the slowdown of Moore’s Law in transistor scaling, heterogeneous integration has emerged as a promising solution for continued system scaling, reduced form factor, improved power efficiency, and cost optimization. Particularly in wireless technology, the tight integration of compound semiconductors with silicon-based wafers is critical to realizing unified systems of front-end integrated circuits, baseband processors, accelerators, and memory for next-generation antennas-to-AI platforms. This work makes a significant advancement by demonstrating 3D integration of multiple GaN chips with silicon CMOS and pushes the boundaries of current technological capabilities,” says Atom Watanabe, a research scientist at IBM who was not involved with this paper.
This work is supported, in part, by the U.S. Department of Defense through the National Defense Science and Engineering Graduate (NDSEG) Fellowship Program and CHIMES, one of the seven centers in JUMP 2.0, a Semiconductor Research Corporation Program by the Department of Defense and the Defense Advanced Research Projects Agency (DARPA). Fabrication was carried out using facilities at MIT.nano, the Air Force Research Laboratory, and Georgia Tech.
© Image: Courtesy of the researchers
Cuts imperil ‘keys to future health’

Nicole Romero removes biological samples from a freezer at the Chan School of Public Health.
Photos by Veasey Conway/Harvard Staff Photographer
Anna Lamb
Harvard Staff Writer
Chan School scrambles to protect living legacy of landmark Nurses’ Health studies
We all know that smoking is a killer. Postmenopausal women are told by their doctors to maintain a healthy weight to reduce their risk of breast cancer. Trans fats have mostly disappeared from our diets.
These groundbreaking interventions are rooted in the Nurses’ Health Studies, which have tracked data on lives and lifestyles — and taken biological samples — from thousands of participating nurses for decades. Now, federal research funding cuts are putting these efforts in jeopardy.
The studies’ biological samples are stored in a network of high-powered freezers, which must maintain temperatures as low as minus 170 degrees Celsius. The freezers are filled with liquid nitrogen and maintained by a small staff of research assistants and managers at the Harvard T.H. Chan School of Public Health and Brigham and Women’s Hospital. Grants to operate the biorepository have been terminated.
“These women have given all they can for us,” said biobank manager Janine Neville-Golden of the volunteers whose blood and tissue are stored in the repository and who have given their time over years to answer questionnaires and undergo observation during life changes such as illness and pregnancy. “We’ve got to protect these right now. They’re keys to future health.”
The Nurses’ Health Study, which was started in 1976 and continued through a second cohort, Nurses’ Health Study II, in the late ’80s, has contributed to breakthroughs in diet research, cancer research, and the understanding of hormones in women’s health. The Nurses’ Health Study 3, launched in 2010, includes different types of health workers and, for the first time, male nurses.
“This is a unique, irreplaceable resource, in many regards. There are other biobanks, but this one is unique in scale and associated data.”
Jorge Chavarro
In 2010, Chan School researchers Jorge Chavarro, Walter Willett, Janet Rich-Edwards, and Stacey Missmer launched the third study in collaboration with investigators at the Channing Division of Network Medicine at Brigham and Women’s Hospital and Harvard Medical School. This initiative was started in conjunction with the Growing Up Today Study (GUTS), which recruits children of Nurses’ Health II participants and has sought to expand on insights gathered through the previous studies.
In light of the funding cuts, collection of samples for both studies has ceased.
“This is a unique, irreplaceable resource, in many regards,” said Chavarro, principal investigator for both Nurses’ Health Study 3 and GUTS. “There are other biobanks, but this one is unique in scale and associated data.”
Chavarro said that while biological samples are relatively easy to capture, “What makes a biobank useful is the fact that you’re able to connect information from those samples to information about the health of people.”
Many of the participants from Nurses’ Health Studies I and II — who number in the hundreds of thousands — have given multiple biological samples, including urine, cheek swabs, and blood.
“These samples were first collected when people were young and healthy and contain decades worth of information about people’s lifestyles, and follow-ups,” Chavarro said.

He noted that the Chan School is working hard to find funding to replace the lost federal grants. But without a long-term solution, the essential liquid nitrogen cannot be purchased, and millions of samples will degrade.
“If there’s not a sustainable mechanism to continue paying for the ongoing operations of the biorepositories, we’re going to lose samples,” he said. “It’s probably not going to be this week, but it is not something that can wait forever.”
Chavarro said that there is also a real risk of losing the team that makes specimen research like his possible.
“Operating a biorepository is not just putting samples in a freezer. It requires a lot of specific technical expertise,” he said. “You need to know how to store samples and under what conditions, and if there’s a freezer failure, you need people who know how to respond.”
Neville-Golden, who has been with the biorepository for 16 years, manages a skeleton crew responsible for maintaining and pulling samples for research. If something goes wrong with a freezer in the middle of the night, she’s the one who’s called. For her, the project is very much a human endeavor.
“I’ve learned a lot,” she said, from freezer maintenance to the lofty goals of the researchers she works with to test hypotheses against the samples. “It’s not about the money, it’s about service and the greater good.”
Neville-Golden said that her team — a handful of research assistants and a dedicated project manager, Nicole Romero — gets somewhere in the neighborhood of 200 external requests per year to use data from the cohorts.
“There’s a long waiting line to get access to these samples, just because there’s not enough person-power to be pulling out the samples that people want,” she said.
Their team hand-picks samples out of tens of thousands stored in each freezer, thaws them, and makes them research-ready. It’s hard work, but Neville-Golden said she tries to keep in mind the people who gave the samples, and what they hoped when giving of themselves.
“A couple of them we’ve talked to over the years said when we are at work and we see people who are that ill and going through everything that they’re going through, we want to do whatever we can do to stop that, to make things better, to eliminate the pain, the suffering, that kind of thing,” she said. “So it really has been a labor of love.”
Combining technology, education, and human connection to improve online learning
MIT Morningside Academy for Design (MAD) Fellow Caitlin Morris is an architect, artist, researcher, and educator who has studied psychology and used online learning tools to teach herself coding and other skills. She’s a soft-spoken observer, with a keen interest in how people use space and respond to their environments. Combining her observational skills with active community engagement, she works at the intersection of technology, education, and human connection to improve digital learning platforms.
Morris grew up in rural upstate New York in a family of makers. She learned to sew, cook, and build things with wood at a young age. One of her earlier memories is of a small handsaw she made — with the help of her father, a professional carpenter. It had wooden handles on both sides to make sawing easier for her.
Later, when she needed to learn something, she’d turn to project-based communities, rather than books. She taught herself to code late at night, taking advantage of community-oriented platforms where people answer questions and post sketches, allowing her to see the code behind the objects people made.
“For me, that was this huge, wake-up moment of feeling like there was a path to expression that was not a traditional computer-science classroom,” she says. “I think that’s partly why I feel so passionate about what I’m doing now. That was the big transformation: having that community available in this really personal, project-based way.”
Subsequently, Morris has become involved in community-based learning in diverse ways: She’s a co-organizer of the MIT Media Lab’s Festival of Learning; she leads creative coding community meetups; and she’s been active in open-source software development.
“My years of organizing learning and making communities — both in person and online — have shown me firsthand how powerful social interaction can be for motivation and curiosity,” Morris says. “My research is really about identifying which elements of that social magic are most essential, so we can design digital environments that better support those dynamics.”
Even in her artwork, Morris sometimes works with a collective. She’s contributed to the creation of about 10 large art installations that combine movement, sound, imagery, lighting, and other technologies to immerse the visitor in an experience evoking some aspect of nature, such as flowing water, birds in flight, or crowd kinetics. These marvelous installations are commanding and calming at the same time, possibly because they focus the mind, eye, and sometimes the ear.
She did much of this work with New York-based Hypersonic, a company of artists and technologists specializing in large kinetic installations in public spaces. Before that, she earned a BS in psychology and a BS in architectural building sciences from Rensselaer Polytechnic Institute, then an MFA in design and technology from the Parsons School of Design at The New School.
During, in between, after, and sometimes concurrently, she taught design, coding, and other technologies at the high school, undergraduate, and graduate-student levels.
“I think what kind of got me hooked on teaching was that the way I learned as a child was not the same as in the classroom,” Morris explains. “And I later saw this in many of my students. I got the feeling that the normal way of learning things was not working for them. And they thought it was their fault. They just didn’t really feel welcome within the traditional education model.”
Morris says that when she worked with those students, tossing aside tradition and instead saying — “You know, we’re just going to do this animation. Or we’re going to make this design or this website or these graphics, and we’re going to approach it in this totally different way” — she saw people “kind of unlock and be like, ‘Oh my gosh. I never thought I could do that.’
“For me, that was the hook, that’s the magic of it. Because I was coming from that experience of having to figure out those unlock mechanisms for myself, it was really exciting to be able to share them with other people, those unlock moments.”
For her doctoral work with the MIT Media Lab’s Fluid Interfaces Group, she’s focusing on the personal space and emotional gaps associated with learning, particularly online and AI-assisted learning. This research builds on her experience increasing human connection in both physical and virtual learning environments.
“I’m developing a framework that combines AI-driven behavioral analysis with human expert assessment to study social learning dynamics,” she says. “My research investigates how social interaction patterns influence curiosity development and intrinsic motivation in learning, with particular focus on understanding how these dynamics differ between real peers and AI-supported environments.”
The first step in her research is determining which elements of social interaction are not replaceable by an AI-based digital tutor. Following that assessment, her goal is to build a prototype platform for experiential learning.
“I’m creating tools that can simultaneously track observable behaviors — like physical actions, language cues, and interaction patterns — while capturing learners’ subjective experiences through reflection and interviews,” Morris explains. “This approach helps connect what people do with how they feel about their learning experience.
“I aim to make two primary contributions: first, analysis tools for studying social learning dynamics; and second, prototype tools that demonstrate practical approaches for supporting social curiosity in digital learning environments. These contributions could help bridge the gap between the efficiency of digital platforms and the rich social interaction that occurs in effective in-person learning.”
Her goals make Morris a perfect fit for the MIT MAD Fellowship. One statement in MAD’s mission is: “Breaking away from traditional education, we foster creativity, critical thinking, making, and collaboration, exploring a range of dynamic approaches to prepare students for complex, real-world challenges.”
Morris wants to help community organizations deal with the rapid AI-powered changes in education, once she finishes her doctorate in 2026. “What should we do with this ‘physical space versus virtual space’ divide?” she asks. That is the space currently captivating Morris’s thoughts.
© Photo: Adélaïde Zollinger
A sounding board for strengthening the student experience
During his first year at MIT in 2021, Matthew Caren ’25 received an intriguing email inviting students to apply to become members of the MIT Schwarzman College of Computing’s (SCC) Undergraduate Advisory Group (UAG). He immediately shot off an application.
Caren is a jazz musician who majored in computer science and engineering, and minored in music and theater arts. He was drawn to the college because of its focus on the applied intersections between computing, engineering, the arts, and other academic pursuits. Caren eagerly joined the UAG and stayed on it all four years at MIT.
First formed in April 2020, the group brings together a committee of around 25 undergraduate students representing a broad swath of both traditional and blended majors in electrical engineering and computer science (EECS) and other computing-related programs. They advise the college’s leadership on issues, offer constructive feedback, and serve as a sounding board for innovative new ideas.
“The ethos of the UAG is the ethos of the college itself,” Caren explains. “If you very intentionally bring together a bunch of smart, interesting, fun-to-be-around people who are all interested in completely diverse things, you'll get some really cool discussions and interactions out of it.”
Along the way, he’s also made “dear” friends and found true colleagues. In the group’s monthly meetings with SCC Dean Dan Huttenlocher and Deputy Dean Asu Ozdaglar, who is also the department head of EECS, UAG members speak openly about challenges in the student experience and offer recommendations to guests from across the Institute, such as faculty who are developing new courses and looking for student input.
“This group is unique in the sense that it’s a direct line of communication to the college’s leadership,” says Caren. “They make time in their insanely busy schedules for us to explain where the holes are, and what students’ needs are, directly from our experiences.”
“The students in the group are keenly interested in computer science and AI, especially how these fields connect with other disciplines. They’re also passionate about MIT and eager to enhance the undergraduate experience. Hearing their perspective is refreshing — their honesty and feedback have been incredibly helpful to me as dean,” says Huttenlocher.
“Meeting with the students each month is a real pleasure. The UAG has been an invaluable space for understanding the student experience more deeply. They engage with computing in diverse ways across MIT, so their input on the curriculum and broader college issues has been insightful,” Ozdaglar says.
UAG program manager Ellen Rushman says that “Asu and Dan have done an amazing job cultivating a space in which students feel safe bringing up things that aren’t positive all the time.” The group’s suggestions are frequently implemented, too.
For example, in 2021, Skidmore, Owings & Merrill, the architects designing the new SCC building, presented their renderings at a UAG meeting to request student feedback. Their original interiors layout offered very few of the hybrid study and meeting booths that are so popular in today’s first floor lobby.
Hearing strong UAG opinions about the sort of open-plan, community-building spaces that students really valued was one of the things that prompted the change to the current floor plan. “It’s super cool walking into the personalized space and seeing it constantly being in use and always crowded. I actually feel happy when I can’t get a table,” says Caren, who has just ended his tenure as co-chair of the group in preparation for graduation.
Caren’s co-chair, rising senior Julia Schneider, who is double-majoring in artificial intelligence and decision-making and mathematics, joined the UAG as a first-year to understand more about the college’s mission of fostering interdepartmental collaborations.
“Since I am a student in electrical engineering and computer science, but I conduct research in mechanical engineering on robotics, the college’s mission of fostering interdepartmental collaborations and uniting them through computing really spoke to my personal experiences in my first year at MIT,” Schneider says.
During her time on the UAG, members have joined subgroups focused on achieving different programmatic goals of the college, such as curating a public lecture series for the 2025-26 academic year to give MIT students exposure to faculty who conduct research in other disciplines that relate to computing.
At one meeting, after hearing how challenging it is for students to understand all the possible courses to take during their tenure, Schneider and some UAG peers formed a subgroup to find a solution.
The students agreed that some of the best courses they’ve taken at MIT, or pairings of courses that really struck a chord with their interdisciplinary interests, came their way because they spoke to upperclassmen and got recommendations. “This kind of tribal knowledge doesn’t really permeate to all of MIT,” Schneider explains.
For the last six months, Schneider and the subgroup have been working on a course visualization website, NerdXing, which came out of these discussions.
Guided by Rob Miller, Distinguished Professor of Computer Science in EECS, the subgroup used a dataset of EECS course enrollments over the past decade to develop a different type of tool from those MIT students typically use, such as CourseRoad.
Miller, who regularly attends the UAG meetings in his role as the education officer for the college’s cross-cutting initiative, Common Ground for Computing Education, comments, “The really cool idea here is to help students find paths that were taken by other people who are like them — not just interested in computer science, but maybe also in biology, or music, or economics, or neuroscience. It’s very much in the spirit of the College of Computing — applying data-driven computational methods, in support of students with wide-ranging computational interests.”
Opening the NerdXing pilot, Schneider gave a demo. A computer science (CS) major, for example, selects their major and a class of interest, then expands a huge graph presenting all the possible courses that CS peers have taken over the past decade.
She clicked on class 18.404 (Theory of Computation) as the starting class of interest, which led to class 6.7900 (Machine Learning), and then unexpectedly to 21M.302 (Harmony and Counterpoint II), an advanced music class.
“You start to see aggregate statistics that tell you how many students took each course, and you can further pare it down to see the most popular courses in CS or follow lines of red dots between courses to see the typical sequence of classes taken.”
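The underlying data structure is straightforward to picture. Below is a hedged sketch, in Python, of the kind of course-sequence graph a tool like NerdXing might build from enrollment records; the student records here are invented for illustration, and the actual NerdXing implementation may differ.

```python
from collections import Counter

# Hypothetical enrollment records: each student's courses in the order taken.
enrollments = {
    "student_a": ["18.404", "6.7900", "21M.302"],
    "student_b": ["18.404", "6.7900"],
    "student_c": ["18.404", "6.1010", "6.7900"],
}

# Count directed edges between consecutive courses; edge weights become
# the aggregate statistics shown in the graph view.
edge_counts = Counter()
for courses in enrollments.values():
    for a, b in zip(courses, courses[1:]):
        edge_counts[(a, b)] += 1

# The most common follow-on courses after 18.404 in this toy dataset.
for (a, b), n in edge_counts.most_common():
    if a == "18.404":
        print(f"{a} -> {b}: taken next by {n} student(s)")
```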
By getting granular on the graph, users begin to see classes that they have probably never heard anyone talking about in their program. “I think that one of the reasons you come to MIT is to be able to take cool stuff exactly like this,” says Schneider.
The tool aims to show students how they can choose classes that go far beyond just filling degree requirements. It’s just one example of how UAG is empowering students to strengthen the college and the experiences it offers them.
“We are MIT students. We have the skills to build solutions,” Schneider says. “This group of people not only brings up ways in which things could be better, but we take it into our own hands to fix things.”
© Photo: Eric Fletcher
Unpacking the bias of large language models
Research has shown that large language models (LLMs) tend to overemphasize information at the beginning and end of a document or conversation, while neglecting the middle.
This “position bias” means that, if a lawyer is using an LLM-powered virtual assistant to retrieve a certain phrase in a 30-page affidavit, the LLM is more likely to find the right text if it is on the initial or final pages.
MIT researchers have discovered the mechanism behind this phenomenon.
They created a theoretical framework to study how information flows through the machine-learning architecture that forms the backbone of LLMs. They found that certain design choices which control how the model processes input data can cause position bias.
Their experiments revealed that model architectures, particularly those affecting how information is spread across input words within the model, can give rise to or intensify position bias, and that training data also contribute to the problem.
In addition to pinpointing the origins of position bias, their framework can be used to diagnose and correct it in future model designs.
This could lead to more reliable chatbots that stay on topic during long conversations, medical AI systems that reason more fairly when handling a trove of patient data, and code assistants that pay closer attention to all parts of a program.
“These models are black boxes, so as an LLM user, you probably don’t know that position bias can cause your model to be inconsistent. You just feed it your documents in whatever order you want and expect it to work. But by understanding the underlying mechanism of these black-box models better, we can improve them by addressing these limitations,” says Xinyi Wu, a graduate student in the MIT Institute for Data, Systems, and Society (IDSS) and the Laboratory for Information and Decision Systems (LIDS), and first author of a paper on this research.
Her co-authors include Yifei Wang, an MIT postdoc; and senior authors Stefanie Jegelka, an associate professor of electrical engineering and computer science (EECS) and a member of IDSS and the Computer Science and Artificial Intelligence Laboratory (CSAIL); and Ali Jadbabaie, professor and head of the Department of Civil and Environmental Engineering, a core faculty member of IDSS, and a principal investigator in LIDS. The research will be presented at the International Conference on Machine Learning.
Analyzing attention
LLMs like Claude, Llama, and GPT-4 are powered by a type of neural network architecture known as a transformer. Transformers are designed to process sequential data, encoding a sentence into chunks called tokens and then learning the relationships between tokens to predict what word comes next.
These models have gotten very good at this because of the attention mechanism, which uses interconnected layers of data processing nodes to make sense of context by allowing tokens to selectively focus on, or attend to, related tokens.
But if every token can attend to every other token in a 30-page document, that quickly becomes computationally intractable. So, when engineers build transformer models, they often employ attention masking techniques which limit the words a token can attend to.
For instance, a causal mask only allows each word to attend to the words that came before it.
Engineers also use positional encodings to help the model understand the location of each word in a sentence, improving performance.
The MIT researchers built a graph-based theoretical framework to explore how these modeling choices (attention masks and positional encodings) could affect position bias.
“Everything is coupled and tangled within the attention mechanism, so it is very hard to study. Graphs are a flexible language to describe the dependent relationship among words within the attention mechanism and trace them across multiple layers,” Wu says.
Their theoretical analysis suggested that causal masking gives the model an inherent bias toward the beginning of an input, even when that bias doesn’t exist in the data.
If the earlier words are relatively unimportant for a sentence’s meaning, causal masking can cause the transformer to pay more attention to its beginning anyway.
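A toy calculation makes this concrete. The NumPy sketch below is our own illustration, not code from the paper: the raw attention scores are perfectly uniform, yet after causal masking and row normalization, earlier positions end up receiving more total attention.

```python
import numpy as np

T = 5  # toy sequence length

# Causal mask: position i may attend only to positions j <= i.
mask = np.tril(np.ones((T, T), dtype=bool))

# Perfectly uniform attention scores, then mask and normalize each row.
scores = np.zeros((T, T))
scores[~mask] = -np.inf
attn = np.exp(scores)
attn /= attn.sum(axis=1, keepdims=True)

# Total attention received by each position (column sums): the earliest
# token collects the most, even though no position was favored.
print(np.round(attn.sum(axis=0), 2))  # [2.28 1.28 0.78 0.45 0.2 ]
```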
“While it is often true that earlier words and later words in a sentence are more important, if an LLM is used on a task that is not natural language generation, like ranking or information retrieval, these biases can be extremely harmful,” Wu says.
As a model grows, with additional attention layers, this bias is amplified because earlier parts of the input are used more frequently in the model’s reasoning process.
They also found that using positional encodings to link words more strongly to nearby words can mitigate position bias. The technique refocuses the model’s attention in the right place, but its effect can be diluted in models with more attention layers.
And these design choices are only one cause of position bias — some bias can come from the training data the model uses to learn how to prioritize words in a sequence.
“If you know your data are biased in a certain way, then you should also finetune your model on top of adjusting your modeling choices,” Wu says.
Lost in the middle
After they’d established a theoretical framework, the researchers performed experiments in which they systematically varied the position of the correct answer in text sequences for an information retrieval task.
The experiments showed a “lost-in-the-middle” phenomenon, where retrieval accuracy followed a U-shaped pattern. Models performed best if the right answer was located at the beginning of the sequence. Performance declined the closer it got to the middle before rebounding a bit if the correct answer was near the end.
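For readers who want to probe a model of their own, the sketch below mirrors that protocol in schematic form. It is a stand-in, not the paper’s actual harness: the query_model stub, the prompts, and the distractor text are all invented, and a real LLM call would replace the stub.

```python
def query_model(context: str, question: str) -> str:
    # Placeholder for a real LLM call; swap in any model API here.
    return "42"

def run_trial(answer_position: int, n_sentences: int = 30) -> bool:
    # Bury one "needle" sentence at a chosen depth among distractors.
    sentences = [f"Distractor fact number {i}." for i in range(n_sentences)]
    sentences[answer_position] = "The access code is 42."
    context = " ".join(sentences)
    return "42" in query_model(context, "What is the access code?")

# Sweep the answer position from the start to the end of the context;
# with a real model, a U-shaped accuracy curve indicates position bias.
for pos in range(0, 30, 5):
    accuracy = sum(run_trial(pos) for _ in range(20)) / 20
    print(f"answer at sentence {pos:2d}: accuracy {accuracy:.2f}")
```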
Ultimately, their work suggests that using a different masking technique, removing extra layers from the attention mechanism, or strategically employing positional encodings could reduce position bias and improve a model’s accuracy.
“By doing a combination of theory and experiments, we were able to look at the consequences of model design choices that weren’t clear at the time. If you want to use a model in high-stakes applications, you must know when it will work, when it won’t, and why,” Jadbabaie says.
In the future, the researchers want to further explore the effects of positional encodings and study how position bias could be strategically exploited in certain applications.
“These researchers offer a rare theoretical lens into the attention mechanism at the heart of the transformer model. They provide a compelling analysis that clarifies longstanding quirks in transformer behavior, showing that attention mechanisms, especially with causal masks, inherently bias models toward the beginning of sequences. The paper achieves the best of both worlds — mathematical clarity paired with insights that reach into the guts of real-world systems,” says Amin Saberi, professor and director of the Stanford University Center for Computational Market Design, who was not involved with this work.
This research is supported, in part, by the U.S. Office of Naval Research, the National Science Foundation, and an Alexander von Humboldt Professorship.
© Credit: MIT News; iStock
Onion holds up mirror; society flashes big smile (with green stuff in teeth)

Christine Wenc.
Photo by Alexander Andre
How some students at University of Wisconsin-Madison created satiric cultural institution
Liz Mineo
Harvard Staff Writer

The Onion has been making fun of human folly since its founding by two undergrads at the University of Wisconsin-Madison in 1988. The mock news site has created satiric pieces so smart some believed them real, others that were just plain silly, and one headline (“‘No Way to Prevent This,’ Says Only Nation Where This Regularly Happens”) that has achieved dark fame by being reposted after each U.S. mass shooting since the 2014 Isla Vista, Calif., attack.
In this edited interview, Christine Wenc, A.M. ’08, talks about her new book “Funny Because It’s True,” on the origins of the newspaper that proclaims itself “America’s Finest News Source.” Wenc spoke about the legacy of The Onion, now based in Chicago, how it created modern news satire, and why it is revered as a cultural institution.
You were part of the original staff of The Onion. What drew you in?
I was 19 years old, and Tim Keck, who founded The Onion in 1988, was one of my roommates. Tim was a college sophomore and the youngest child of a Midwestern newspaper family. He needed money, and with his friend Chris Johnson, decided to start a college newspaper. When they asked me to join in, I was like, “Sure,” because that’s just what you do when you’re 19 years old.
Tim recruited a bunch of humanities majors, and one of them, who was an improv comedian, suggested that the paper should just be all made-up stories, and that’s how it happened. I left a couple of years later, with other staff, to become the editor of Seattle’s alternative weekly, The Stranger, which was also started by Keck.
How did The Onion develop its distinctive satirical voice?
It didn’t find its voice until later on. It was a parody of the National Enquirer in the beginning. The first headline was “Mendota Monster Mauls Madison” about a monster that had been sighted in Lake Mendota, near UW-Madison.
It was in the mid-1990s when a new staff developed that dry, satirical voice we now recognize in The Onion. Around that time, the writers argued about the paper’s mission and decided that it would critique society from a progressive point of view.
“But from the very beginning, The Onion endeavored to make fun of human foolishness. Its motto was ‘Tu Stultus Es,’ ‘You Are Dumb’ in Latin.”
The original members were from Wisconsin and shared a working-class background. Madison is known as the Berkeley of the Midwest, and it was that microclimate, a sort of Midwestern progressive underdog spirit, that infused The Onion’s satire.
But from the very beginning, The Onion endeavored to make fun of human foolishness. Its motto was “Tu Stultus Es,” “You Are Dumb” in Latin.
The Onion’s intent was never to be political. The point was to entertain, but its humor had to follow certain rules, such as we don’t make fun of women, we make fun of sexism; we don’t make fun of Black people, we make fun of racism. Pointing out things like racism and sexism was pointing at humans being stupid.
There have been cases in which The Onion’s satire was not understood, and some readers believed those made-up stories to be true. Was that a concern?
We were always aware that there were people who couldn’t tell the difference between The Onion’s news satire and real news. There are stories that were clearly jokes that were believed to be real.
For example, a piece that the Chinese media picked up about the U.S. Congress demanding to build a new Capitol; another on how the Harry Potter books were responsible for increasing Satanism; and one that announced the opening of an $8 billion Abortionplex by Planned Parenthood. The latter was reposted by a conservative politician.
Satire is a literary art form, and The Onion writers were a bunch of creative, artistic, progressive weirdos who were working outside the system and kept an independent point of view, which allowed them to see things that others couldn’t or didn’t want to see.
For instance, The Onion didn’t fall for the false claim of weapons of mass destruction during the Gulf War; headline after headline, it made fun of it while the mainstream media acted like court stenographers repeating what they were being told by the administration. It turned out that The Onion was correct.
Recent real news headlines look like they have come out of The Onion. What can a satirical newspaper do when reality seems so bizarre?
Right now, the real world indeed seems like an Onion headline because people in the real world are behaving like people in Onion stories. There is a level of respect or propriety that has just been erased in the behavior of a lot of public figures.
On the other hand, I think The Onion’s success lies in its unique kind of humor, which juxtaposes a straight news format that faithfully mimics the dry style of an Associated Press story and ridiculous content. It’s the juxtaposition between the matter-of-fact tone and the crazy stories that makes it funny. What The Onion does is news satire — good fake news — to point at the wrongs of the world to make it better. The Onion never did fake news, which is manufactured as propaganda to sow chaos and make people afraid.
What impact do you think The Onion has had on modern news satire?
“Satire has always used humor to point out the world’s injustices.”
It helped create American modern news satire. Satire has always used humor to point out the world’s injustices. It’s one of the few rhetorical devices that is effective against spin or manipulation. By poking fun at the flaws of humans and society, satire just has a way to expose the absurdities of life. It’s more than telling jokes.
The Onion is held in very high esteem in the comedy world. Many Onion writers have gone to work for comedy shows, including Jon Stewart’s “Daily Show” and “The Colbert Report” and others.
I’m glad to see The Onion alive. First, it was Generation X, which started The Onion, then the millennials behind the Onion News Network, a spoof of CNN and Fox News. Ben Collins, The Onion’s CEO, keeps saying that The Onion can say stuff in a way that the real news can’t. And that, to me, is part of the progressive tradition that inspired the original founders.
I think that humor and satire can also be survival mechanisms and a morale builder. Which is not nothing, especially in difficult times. Satire says that you are not the only one who thinks what’s going on is ridiculous — and it is always helpful to know you are not alone.
This compact, low-power receiver could give a boost to 5G smart devices
MIT researchers have designed a compact, low-power receiver for 5G-compatible smart devices that is about 30 times more resilient to a certain type of interference than some traditional wireless receivers.
The low-cost receiver would be ideal for battery-powered internet of things (IoT) devices like environmental sensors, smart thermostats, or other devices that need to run continuously for a long time, such as health wearables, smart cameras, or industrial monitoring sensors.
The researchers’ chip uses a passive filtering mechanism that consumes less than a milliwatt of static power while protecting both the input and output of the receiver’s amplifier from unwanted wireless signals that could jam the device.
Key to the new approach is a novel arrangement of precharged, stacked capacitors, which are connected by a network of tiny switches. These minuscule switches need much less power to be turned on and off than those typically used in IoT receivers.
The receiver’s capacitor network and amplifier are carefully arranged to leverage a phenomenon in amplification that allows the chip to use much smaller capacitors than would typically be necessary.
“This receiver could help expand the capabilities of IoT gadgets. Smart devices like health monitors or industrial sensors could become smaller and have longer battery lives. They would also be more reliable in crowded radio environments, such as factory floors or smart city networks,” says Soroush Araei, an electrical engineering and computer science (EECS) graduate student at MIT and lead author of a paper on the receiver.
He is joined on the paper by Mohammad Barzgari, a postdoc in the MIT Research Laboratory of Electronics (RLE); Haibo Yang, an EECS graduate student; and senior author Negar Reiskarimian, the X-Window Consortium Career Development Assistant Professor in EECS at MIT and a member of the Microsystems Technology Laboratories and RLE. The research was recently presented at the IEEE Radio Frequency Integrated Circuits Symposium.
A new standard
A receiver acts as the intermediary between an IoT device and its environment. Its job is to detect and amplify a wireless signal, filter out any interference, and then convert it into digital data for processing.
Traditionally, IoT receivers operate on fixed frequencies and suppress interference using a single narrow-band filter, which is simple and inexpensive.
But the new technical specifications of the 5G mobile network enable reduced-capability devices that are more affordable and energy-efficient. This opens a range of IoT applications to the faster data speeds and increased network capability of 5G. These next-generation IoT devices need receivers that can tune across a wide range of frequencies while still being cost-effective and low-power.
“This is extremely challenging because now we need to not only think about the power and cost of the receiver, but also flexibility to address numerous interferers that exist in the environment,” Araei says.
To reduce the size, cost, and power consumption of an IoT device, engineers can’t rely on the bulky, off-chip filters that are typically used in devices that operate on a wide frequency range.
One solution is to use a network of on-chip capacitors that can filter out unwanted signals. But these capacitor networks are prone to a special type of signal noise known as harmonic interference.
In prior work, the MIT researchers developed a novel switched-capacitor network that targets these harmonic signals as early as possible in the receiver chain, filtering out unwanted signals before they are amplified and converted into digital bits for processing.
Shrinking the circuit
Here, they extended that approach by using the novel switched-capacitor network as the feedback path in an amplifier with negative gain. This configuration leverages the Miller effect, a phenomenon that enables small capacitors to behave like much larger ones.
“This trick lets us meet the filtering requirement for narrow-band IoT without physically large components, which drastically shrinks the size of the circuit,” Araei says.
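The arithmetic behind the Miller effect is simple to sketch. In the classic formulation, a capacitor C in the feedback path of an inverting amplifier with gain -A looks like C(1 + A) from the input. The values below are illustrative assumptions, not figures from the MIT chip.

```python
# Miller effect: effective input capacitance of a feedback capacitor
# around an inverting amplifier (classic approximation).
C_feedback_pF = 1.0   # physical on-chip capacitor, in picofarads (assumed)
A = 19.0              # magnitude of the inverting amplifier's gain (assumed)

C_effective_pF = C_feedback_pF * (1 + A)
print(f"{C_feedback_pF} pF in feedback acts like {C_effective_pF} pF")
# -> 1.0 pF in feedback acts like 20.0 pF
```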
Their receiver has an active area of less than 0.05 square millimeters.
One challenge the researchers had to overcome was determining how to apply enough voltage to drive the switches while keeping the overall power supply of the chip at only 0.6 volts.
In the presence of interfering signals, such tiny switches can turn on and off in error, especially if the voltage required for switching is extremely low.
To address this, the researchers came up with a novel solution, using a special circuit technique called bootstrap clocking. This method boosts the control voltage just enough to ensure the switches operate reliably while using less power and fewer components than traditional clock boosting methods.
Taken together, these innovations enable the new receiver to consume less than a milliwatt of power while blocking about 30 times more harmonic interference than traditional IoT receivers.
“Our chip also is very quiet, in terms of not polluting the airwaves. This comes from the fact that our switches are very small, so the amount of signal that can leak out of the antenna is also very small,” Araei adds.
Because their receiver is smaller than traditional devices and relies on switches and precharged capacitors instead of more complex electronics, it could be more cost-effective to fabricate. In addition, since the receiver design can cover a wide range of signal frequencies, it could be implemented on a variety of current and future IoT devices.
Now that they have developed this prototype, the researchers want to enable the receiver to operate without a dedicated power supply, perhaps by harvesting Wi-Fi or Bluetooth signals from the environment to power the chip.
This research is supported, in part, by the National Science Foundation.
© Credit: iStock
How market reactions to recent U.S. tariffs hint at start of global shift for nation

Christy DeSmith
Harvard Staff Writer
Economist updates literature on optimal American import-tax rate in world of interconnected trade, investment
President Trump’s tariffs, announced on April 2, upset the global economy in new ways.
“The financial meltdown they triggered was really striking,” said Oleg Itskhoki, a professor of economics. “What happened to the stock market, what happened to bond yields, what happened to the dollar exchange rate. They’re all connected. You can’t study tariffs anymore without considering what happens in the financial market.”
In a new working paper, Itskhoki and longtime collaborator Dmitry Mukhin of the London School of Economics explore what they call “the optimal macro tariff,” or the import tax rate most favorable to U.S. economic interests. The international macroeconomists are known for their work detailing how today’s globalized financial market drives currency valuations. Now they’ve expanded that approach to study tariffs for a variety of U.S. policy objectives.
The academic literature was due for an update. The last time the world saw tariffs on this scale was the 1930s, when countries including the U.S. sought to protect jobs amid the Great Depression’s high unemployment.
“There was no wave of protectionism after the Great Financial Crisis of 2008 and ’09, when unemployment in the U.S. exceeded 10 percent,” Itskhoki noted. “It seemed like the developed world had shifted to an equilibrium without tariffs.”
We sat down with Itskhoki recently to ask how tariffs function in a world of deeply interconnected trade and investment. The conversation was edited for length and clarity.
What, exactly, is an “optimal macro tariff”?
The economic literature on optimal tariffs typically asks, “What policy gives a country the most favorable terms of trade with the rest of the world?” That literature typically assumes trade balance, but the last time the U.S. had anything close to balanced trade was 1991 or ’92.
At the same time, macroeconomists tend to think about trade imbalances without considering tariffs too much. What we do in this paper is combine the two.
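For orientation, the classical terms-of-trade benchmark from that literature (the textbook result, not the richer macro-financial model in the working paper) sets the optimal ad valorem tariff at the inverse of the foreign export supply elasticity:

$$
t^{*} = \frac{1}{\varepsilon^{*}_{x}},
$$

where $\varepsilon^{*}_{x}$ is the elasticity of foreign export supply. A large country facing relatively inelastic foreign supply can tilt world prices in its favor, which is why the classical answer turns on market power rather than on trade or fiscal deficits.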
You’ve bridged two very different traditions within economics.
Yes. Because we’ve seen not only the globalization of trade, we’ve also seen the globalization of financial markets with countries holding large portfolios of foreign assets. As it turns out, this is consequential for the optimal tariff.
Your paper focuses on optimal macro tariffs for the U.S. What should we know about the country’s place in the global economy at this very moment?
Macroeconomic research over the last 20 years focused on what is sometimes called the “exorbitant privilege” of the United States.
The country may have had a persistent trade deficit and consequently accumulated fewer assets than liabilities. But its foreign assets tended to be of the riskier sort, like direct foreign investments and portfolio holdings. They generated high returns relative to liabilities — which are, to a large extent, U.S. Treasuries.
And the federal government enjoyed paying low returns on U.S. Treasuries until recently because they were viewed as the world’s safest asset. That’s what allowed the U.S. to run a trade deficit and the U.S. government to run a large fiscal deficit without dire financial consequences.
Higher interest rates mean that required yields on U.S. Treasuries are now quite high, so the government can no longer borrow cheaply. Interest payments on federal debt now amount to around half of the country’s massive fiscal deficit, which is itself larger than the trade deficit.
We typically see developing countries going into periods of big trade deficits and big fiscal deficits. It’s very unusual to find the world’s dominant country in this position.

Oleg Itskhoki.
Veasey Conway/Harvard Staff Photographer
What happens when you add tariffs to the mix?
In line with theoretical predictions, the dollar appreciated after most previous tariff announcements. With tariffs, Americans buy fewer imports. Less foreign exchange is needed to pay for them, so there are more dollars left over, and the currency becomes stronger. That, in turn, hurts U.S. exporters, because American goods become more expensive overseas. Hence, foreigners buy less, resulting in a new equilibrium with less trade on both sides.
In addition, dollar appreciations are akin to a financial transfer from the U.S. to the parts of the world that hold U.S. assets — the so-called “valuation effects.”
Therefore, in a financially globalized world, the optimal tariff for the U.S. is smaller than in previous eras. Furthermore, holdings of U.S. assets offer effective insurance for countries like China and Japan against a possible trade war with the U.S.
But that’s not what played out after April 2. Instead, we saw a depreciating U.S. dollar. Why?
This was surprising indeed. Dollar depreciation happened along with a large meltdown in the U.S. stock market and increasing yields on U.S. Treasuries. At first there was a theory that foreigners were dumping Treasuries. But in reality, there was not much else for them to buy. Maybe they wanted to sell U.S. Treasuries and buy, say, German Bunds of equal quality. In reality, there are 10 times fewer German Bunds than U.S. Treasuries out there today, making it difficult to shift portfolios away from U.S. assets.
Instead what we saw was a clear turn in the currency market. In the past, Asian investors in particular but also European investors to some extent were willing to buy U.S. Treasuries without holding currency insurance. The market expected the U.S. dollar to always appreciate in bad economic times. But April 2 was the first time the dollar massively depreciated in bad times, as global markets turned to pessimism on the announcement of the trade war.
The U.S. dollar now resembles the British pound following the 2016 Brexit vote. Before April 2, Japanese pension funds, for example, may have been willing to hold U.S. assets without buying currency insurance. Now they want to sell that risk of U.S. dollar depreciation to the market. And the required premium for selling that risk resulted in a weaker dollar.
Has the U.S. benefited at all from the trade war?
Well, the U.S. is collecting tariff revenues. But for these very immediate, and very small, monetary gains, the government has potentially triggered a much bigger process that will eliminate some of the benefits the country has enjoyed. I mean, you can call French President Emmanuel Macron or U.K. Prime Minister Keir Starmer to negotiate on tariffs. But you cannot call up the financial market and tell it to have faith in the dollar.
“It doesn’t mean the U.S. will immediately lose its central place in the global financial market, but it is clear that the tariffs marked the start of some sort of realignment.”
It doesn’t mean the U.S. will immediately lose its central place in the global financial market, but it is clear that the tariffs marked the start of some sort of realignment. The fact that we’re discussing a tax bill that is meant to increase the deficit, in this environment, is just insane.
What should laypeople know about the model you’ve constructed to study optimal macro tariffs?
The model provides a formalized environment where you can ask questions and get coherent answers. You can play around with different objectives, like raising revenue or boosting manufacturing employment. We found that indeed there is an optimal tariff for the U.S. — somewhere between 25 and 35 percent — if one ignores the financial market.
But even then, it only works if the government convinces the rest of the world not to retaliate, because there’s a much bigger loss if everybody starts doing tariffs. That’s pretty much how the world lived before the Second World War, before all that collective effort was done to bring down tariffs.
Suddenly, the very myopic optimal tariff has won the day once more. Once we factor in the financial market, the optimal tariff is actually much smaller, at something like 9 percent. And this only takes into account the direct financial losses from valuation effects, without capturing the consequences of the U.S. losing its dominance in the global financial market.
You mentioned playing around with different policy objectives. According to your model, what is the optimal tariff for boosting employment in U.S. manufacturing?
We really thought there would be an optimal tariff for manufacturing employment. It shows you how biased we are. Because make no mistake, tariffs are a trade tax. They reduce the size of the tradable sector, meaning they reduce both imports and exports. It’s true that increased trade with China hurt U.S. manufacturing. But today, a tariff on trade with China will hurt U.S. manufacturing even further.
If the goal is boosting tradable employment, what you actually want are subsidies. Maybe particular regions are targeted. Maybe we decide that certain industries are important — for security, for defense, for maintaining our technological leadership. Ideally, U.S. society would need to decide through the democratic process what activities to subsidize within a balanced budget, given the high costs of borrowing right now. But we are obviously very far from this ideal.
‘Truly the best’

Photos by Veasey Conway/Harvard Staff Photographer
Clea Simon
Harvard Correspondent
65 staffers honored as ‘Harvard Heroes’ for ‘exemplary’ service to mission
The mood was joyous as family and friends packed into Sanders Theatre to celebrate 65 “Harvard Heroes” from across the University on Thursday. Nominated and selected by their peers, these staff members were introduced by the heads of their departments, divisions, or Schools. Their achievements were highlighted in brief and often touchingly personal remarks by President Alan M. Garber.
Processing into the theater to the sounds of Janelle Monáe’s cover of David Bowie’s “Heroes,” the 65 honorees were hailed by Executive Vice President Meredith Weenick as “truly the best.”
“In many ways, large and small, these individuals go above and beyond in service of Harvard’s mission and in support of its people,” she said.
Following an introductory video, which noted, among other facts, that only one-half of 1 percent of Harvard employees are named Harvard Heroes, Vice President for Human Resources Manuel Cuevas-Trisán praised the honorees’ “exemplary efforts.”
Garber, who entered to a prolonged standing ovation, thanked the assembled honorees for their service: “You are the ones who don’t give up, who keep showing up, because you believe in the importance of making real, positive changes — for students, for colleagues, and for the wider world.”
Alumni Affairs and Development
Honorees: Kelly Hahn, Director of Content Strategy; John Prince, Associate Director, Reunions and Classes
Cheers greeted Garber’s highlighting of Hahn’s accomplishments when he interrupted his scripted remarks praising her work creating a centralized resource “amidst complex current events” to note, “There’s an understatement!”
Harvard Graduate School of Education
Honorees: Andrea Le, Associate Director, Community Building and International Student Support; Allison Pingree, Associate Director, Instructional Support and Development; Faina Gould, Senior Research Development Manager
“You’ve been the best kind of action hero for vital research initiatives in a quickly evolving grants landscape,” said Garber, praising Gould.
Harvard Business School
Honorees: Madeline Meehan, Director, Campus Activation; Katia Muser, Senior Director, Software Delivery Excellence; Robin Smith, Senior Manager Support Services
Noting Smith’s extracurricular work as “a theater actor,” Garber concluded, “your empathetic and empowering style takes center stage.”
Harvard Public Affairs and Communications
Honoree: Senior Writer Alvin Powell
“Thank you for chronicling Harvard’s history,” said Garber, “one article at a time.”
Campus Services
Honorees: Timothy Allen, Crew Chief A; Associate Director Matthew Civittolo; Generator Mechanic/Working Foreman Aaron Mayerson; Associate Director of Biosafety Angela Reid; Crimson Catering and Event Services Director Kyle Ronayne; Senior Accounting Manager Angelina Yun
Noting that “Commencement and Reunions require 121 tents, 5,859 tables, and 66,740 chairs,” in his address to Ronayne, Garber concluded: “Everything falls into place because of just one of you.”
Harvard Federal Credit Union
Honoree: Lorraine Gadsby, Lead Member Services Representative
Praising her “pragmatic approach to tough situations” and “genuine consideration for others” over nearly 40 years at Harvard, Garber also saluted Gadsby’s “thoughtful Saturday pastries.”

President Alan M. Garber offered personal remarks for each honoree.

“Harvard Heroes” are nominated from across the University by their peers.

An enthusiastic audience.
Harvard Medical School
Honorees: Safiya Bobb, Associate Director, Program Operations Executive; Susanne Churchill, Executive Director for Biomedical Informatics; Faculty Affairs Academic Appointments Manager Mindy Dellert; Christina Kennedy, Strategic Projects Manager for the Office of Research Administration; Veronica Leo, Program Manager for Academic Advancement; Jennifer Puccetti, Executive Director, Global Health and Social Medicine; Livia Rizzo, MEDscience Senior Associate Director
Indulging in a little word play as he highlighted Bobb’s service, Garber said, “Because your leadership balances business goals with staff needs, the Asynchronous Operations team is always in sync.”
Harvard School of Dental Medicine
Honorees: Academic Societies Coordinator Adrien Doherty; Carrie Sylven, Director of Student Affairs
Garber praised Sylven’s “stress-busting ping-pong tournaments.”
Harvard T.H. Chan School of Public Health
Honorees: Environmental Health Project Coordinator Jeffrey Adams; Henrique Coelho, Assistant Director for PPCR Program Administration; Assistant Director, MPH Generalist and Cross-MPH Programming Megan Kerin; Immunology and Infectious Diseases Director of Administration Marie Richard
Highlighting Adams, “a public health superman,” Garber noted his ability to “neutralize longstanding Institutional Review Board challenges with a single email.”
Harvard University Health Services
Honorees: Jason Ward, Director of Health Plan Operations and Member Services; Marie Haley, Primary Care Physician/PCP Team Leader
Calling Ward “the Tom Brady of Member Services,” Garber concluded, “peers, patients, and providers are glad you’re always on the ball.”
Harvard Graduate School of Design
Honoree: Keith Gnoza, Director of Financial Assistance/Assistant Director of Student Services
Gnoza’s “genuine empathy and engagement through the Student Emergency Fund reassure and support students experiencing unexpected hardship,” said Garber.
Harvard Divinity School
Honoree: Senior Graphic Designer Kristie Welsh
Garber praised Welsh’s “brilliant designs,” noting, “through your commitment and creativity, the School shares its scholarship and good work with audiences far and wide.”
Faculty of Arts and Sciences
Honorees: Associate Dean of Students Lauren Brandt; Ethan Contini-Field, Manager of Asynchronous Course Development; Front Office Manager Kai Crull; Physics Executive Director Despina Bokios; IT Client Support Services Associate Roy Guyton; Business Systems Analyst Raiyan Huq; Magdelena Kenar, Associate Director of Faculty Support Services; Men’s and Women’s Cross Country/Track and Field Associate Head Coach Marc Mangiacotti; Alta Mauro, Associate Dean of Students for Inclusion and Belonging; Paul Rattigan, Senior Concert Piano Technician; Rachel Rockenmacher, Executive Director of the Center for Jewish Studies; Sheila Thimba, Dean for Administration and Finance; Lu Wang, Assistant Director of Undergraduate Studies; Lawrence White, Operations Director for Neuroimaging at the Center for Brain Science
In addition to serious praise for all the honorees, Garber noted Crull’s “outsized hospitality for Remy the cat,” a campus feline fixture.
Harvard Radcliffe Institute
Honoree: Amanda Lubniewski, Head of Student Engagement
“With authenticity, empathy, and attentiveness, you give students indelible experiences,” said Garber.
John A. Paulson School of Engineering and Applied Sciences
Honoree: Leslie Schaffer, Associate Dean for Finance
“You turn ambitious ideas into reality,” said Garber.
Harvard Law School
Honorees: Emily Newburger, Executive Editor of the Harvard Law Bulletin; Jacqueline Calahong, Staff Assistant, Emmett Environmental Law and Policy Clinic
Praising Calahong, Garber said, “In all you do, you’re building a better climate for the clinic and, ultimately, for the Earth.”
Harvard Library
Honorees: Research Data Services Librarian Julie Goldman; Sarah Hoke, Librarian for Collection Development Management; Juliana Kuipers, Associate University Archivist for Collection Development and Records Management Services
In his commendation of Hoke, Garber said, “With practical positivity and a heart for service, you’re expanding access to engaging library collections.”
Harvard University Information Technology
Honorees: David Heitmeyer, Director, Academic Platform Development; Senior Technical Support Engineer Cheryl Johnson; David Sobel, Associate Director for FAS Technology Strategy and Planning
Highlighting Heitmeyer’s 25 years of service, which includes introducing international students to “barbecue and Dr. Pepper,” Garber said: “Building a sense of belonging is in your core architecture.”
Harvard Kennedy School
Honorees: Financial Associate Dawn Hannon; Community Engagement Librarian Alessandra Seiter
“When challenges add up, colleagues are grateful they can count on you,” Garber said of Hannon.
Financial Administration
Honorees: Manager of Administration and Operations Alissa Beideck Landry; Portfolio Team Manager Rebecca Looman
Citing Landry’s oversight of the department’s recent move, Garber said, “Your adaptability, selflessness, and poise under pressure inspire colleagues to follow your lead.”
Memorial Church
Honoree: Director of Finance and Operations Charles Anderson
Garber called Anderson “a blessing to the Mem Church mission of educating minds, expanding hearts, and enriching lives.”
Office of the President
Honoree: Carrie Virusso, Receptionist/Staff Assistant
“In such a dynamic, demanding place, many people — including this president — are grateful that your steady presence keeps things sailing smoothly,” said Garber.
Office of the Vice Provost for Advances in Learning
Honoree: Zachary Wang, Director of Strategic Technology
Naming Wang’s work launching the Learning Experience Platform, Garber said: “Your tireless efforts are revolutionizing teaching and learning University-wide.”
Human Resources
Honoree: Kristina Paolini, Talent Acquisition and Outreach for Human Resources
“Staffing a world-class institution is demanding work, but your KSAs” — knowledge, skills, and abilities — “are leading the way,” said Garber, before exhorting all the honorees to rise for yet another round of enthusiastic cheers and applause.
Projects help students ‘build bridges’ across differences

Julie McDonough
Harvard Correspondent
Online games and small group discussions provide opportunities for people with contrasting points of view to engage
Funded through the President’s Building Bridges Fund — an initiative started last fall to respond to the preliminary recommendations of the Presidential Task Forces — four student groups launched projects this spring to foster constructive dialogue and build relationships across differences. The projects are part of a larger, University-wide effort to promote dialogue across difference.
The student organizers — from Harvard College, Harvard Law School, and the Kenneth C. Griffin Graduate School of Arts and Sciences — took varying approaches to engaging fellow students in meaningful conversations on challenging topics and creating frameworks for productive discussions.
“When we established the fund, we hoped that we’d attract pilot projects that would spark deep and meaningful conversations across campus,” said President Alan M. Garber. “We want to ensure that all voices are heard at Harvard, and this first round of efforts, launched by accomplished student leaders, gave members of our community opportunities not only to engage in constructive dialogue but also to develop skills that will serve them well in many other encounters.”
Awarded funding in February to implement projects by the end of the academic year, the students quickly organized speakers, set up logistics, and worked with faculty and staff for guidance and assistance.
Here is a closer look at the projects completed so far.
The Policy Bridges Project
The idea for the project came from a discussion within the Harvard Griffin GSAS Science Policy Group. The group was searching for a way to enable substantive policy discussions and ideas around divisive issues that demand action. When the Building Bridges funding was announced, they saw an opportunity to engage fellow students on these topics with the hope of finding pathways forward.
The Ph.D. student organizers Arya Kaul (bioinformatics and integrative genomics), Lissah Johnson (biological sciences in public health), and Mia Sievers (biological and biomedical sciences) organized two events: a Climate Policy Fireside Chat and a Technology Policy Panel. The format for both events included inviting outside speakers to share their opinions and expertise, while allowing time for discussion and more informal conversation at post-event gatherings.
The Climate Policy Fireside Chat featured Undersecretary Katherine Antos from the Massachusetts Executive Office of Environmental Affairs. The discussion focused on how to move climate policy forward by identifying shared values among groups with different viewpoints.
“We learned that it is possible to make progress on climate issues when you search for common values,” said Kaul. “Forming partnerships with groups that have different viewpoints is possible when you understand what is important to them.”

Betsy Miller.
Photo by Ricardo Lopez

Betsy Miller, Will Rinehart, and Bruce Schneier.
Photo by Ricardo Lopez
The Technology Policy Panel featured Bruce Schneier, lecturer in public policy at the Harvard Kennedy School and a fellow at the Berkman Klein Center for Internet & Society, and Will Rinehart, senior fellow at the American Enterprise Institute and expert at the Federalist Society’s Emerging Technology Working Group, as panelists. The highlight of the panel was the exploration of “polarity thinking” with moderator Betsy Miller.
“It is very helpful to have a shared language and framework for provocative discussions like these,” said Miller, lecturer on law at Harvard Law School. “Polarities are opposites where the benefits of both are needed over time to succeed. Polarities help us move from ‘either/or’ binary thinking to a ‘both/and’ mindset. It’s a powerful tool for engaging across difference and having difficult conversations with curiosity and respect.”
With two events complete, the hope is to continue to host discussions on a variety of policy topics.
“We would love to be able to explore different areas, like healthcare policy, to find those shared values that help us to make progress,” said Johnson. “And with the polarity-thinking framework, we can have those hard conversations in ways that are productive.”
Tango Project
With the announcement of the President’s Building Bridges Fund grant program, Lucas Woodley, a Ph.D. student in psychology at the Griffin GSAS who works closely with Psychology Professor Joshua Greene in his lab, saw an opportunity for a large-scale deployment of Tango at Harvard. Developed in the lab over the past five years, the cooperative online quiz game promotes openness, respect, and connection across lines of division. Here’s how it works:
- Online, 20-minute games are scheduled for a set date and time.
- On the set date and time, players log in and are paired with an online partner anonymously.
- Together, the student pairs answer quiz questions ranging from pop culture to Harvard history to politically charged topics.
Funding through the President’s Building Bridges Fund allowed Woodley, in collaboration with Greene, to coordinate an event across the Harvard community with incentives for participation. The winning pair received Celtics playoff tickets, and the winning House got $1,000 for its activities fund.
Participating students self-identified as varying grades of liberal or conservative. They were paired randomly with their partners, often playing with someone of differing views. The game is designed to help people with contrasting worldviews learn to trust and respect each other. “What we find is that students have so much fun playing the game,” said Woodley, noting that half of the players gave Tango a perfect 10 out of 10 for enjoyment. “At the same time, they learn how to engage with others on really challenging topics.”
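A minimal sketch of the pairing step described above, written in Python; the player records, field names, and pairing rule are illustrative assumptions rather than the Greene lab’s actual implementation:

```python
import random

# Hypothetical sketch: players who self-identified along a
# liberal-conservative spectrum are shuffled and paired anonymously,
# which frequently (though not always) produces cross-viewpoint pairs.
players = [
    {"id": "p1", "lean": "liberal"},
    {"id": "p2", "lean": "conservative"},
    {"id": "p3", "lean": "very liberal"},
    {"id": "p4", "lean": "somewhat conservative"},
]

def pair_players(players, seed=None):
    """Shuffle the pool and pair adjacent entries; returns 2-tuples."""
    pool = players[:]
    random.Random(seed).shuffle(pool)
    return [(pool[i], pool[i + 1]) for i in range(0, len(pool) - 1, 2)]

for a, b in pair_players(players, seed=42):
    print(f"{a['id']} ({a['lean']}) <-> {b['id']} ({b['lean']})")
```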

Lucas Woodley (left) and Joshua D. Greene.
Photo by Dylan Goodman
Greene and Woodley recently published the results of the Tango experiment in Nature Human Behaviour.
“Tango has so many beneficial outcomes for students in terms of making connections across differences,” said Greene. “Research from the Tango event at Harvard showed that after playing, students were significantly more interested in getting to know students with different viewpoints and significantly more comfortable voicing controversial views on campus.”
Greene and Woodley hope that Tango can be used regularly at moments when large groups of students come together at Harvard, such as at first-year orientation.
Hate & Remediation: Where Does Harvard Go from Here?
The project organized by students at Harvard Law School aimed to spark dialogue among students with different cultural and religious backgrounds. Recognizing the need for a space where students could come together to talk about issues on campus, the Jewish and Muslim student organizers sponsored a discussion examining the nature of hate and how to meaningfully address it on campus.

Randall Kennedy (left) and Noah Feldman.
Photo by Lorin Granger
The panel discussion included Noah Feldman, Felix Frankfurter Professor of Law, chair of the Society of Fellows, and founding director of the Julis-Rabinowitz Program on Jewish and Israeli Law at Harvard Law School, and Randall Kennedy, Michael R. Klein Professor at Harvard Law School. Given the then-expected release of the final reports from the Presidential Task Forces on Antisemitism and Anti-Israeli Bias and Anti-Muslim, Anti-Arab, and Anti-Palestinian Bias, they framed the talk as an opportunity for students to think beyond the standard legal remedies and to consider how they as a community could move forward.
More than 100 students participated in the lecture and follow-up discussion. “We worked really hard to create an event where people felt comfortable and welcome,” said Omar Tariq, a second-year law student. “We had students from a wide range of political, religious, and cultural backgrounds. Students expressed that they’ve been looking for more meaningful discussion of the issues we’re facing on campus and were appreciative of our event.”
The event focused on three goals: building relationships across divergent affinity groups; acting against discrimination, bullying, harassment, or hate; and fostering constructive dialogue. The students felt they achieved their goals given the turnout, discussion, and feedback afterward. The strong relationships built while planning the event provide a solid foundation for future events and collaborations, they said.
“This event was a good opportunity for people coming in with different beliefs and ideas to come together with a desire to make campus better. We got to explore how even though we may disagree deeply, it doesn’t mean we can’t also try to make things better for everyone on campus,” said Shanee Markovitz Kay, a second-year law student. “If you have a shared positive goal, there is a lot of good that can come out of it.”
Questions left unanswered
When the President’s Building Bridges Fund was announced, Harvard College students Irati Evworo Diez ’25, Noa Horowitz ’25, and Ari Kohn ’26 saw it not only as an opportunity to bring together a diverse mix of their classmates, but also as a chance to take advantage of the intellect and expertise of their professors. Following an application process that garnered significant interest, students were selected to take part in a series of dinner conversations. The organizers of the project deliberately chose students from diverse backgrounds, beliefs, and viewpoints with the intention of sparking serious, meaningful, and challenging conversations about issues facing Harvard and the world.
“Sometimes it is difficult to have these types of conversations inside the classroom for a variety of reasons,” said Evworo Diez. “We wanted to give students the chance to have these tough conversations in an intimate setting with a select group of students who would really engage on these challenging topics.”

Post-dinner photo with students and faculty guests James Wood and Claire Messud.
Photo by Ari Kohn
The general structure of the dinners was to have Harvard professors and other subject-matter experts attend, share their expertise or points of view on a topic, and then have students engage with each other in conversation and debate. The aim was to offer the rigor and depth of the best Harvard seminars in a more intimate and comfortable setting. The topics ranged from economics to campus life to gender and beyond. The feedback was incredibly positive.
“As student organizers, we were grateful President Garber initiated the fund so that we could provide this experience to students and begin to shift the culture on campus around engaging in constructive dialogue across difference,” said Evworo Diez. “Some students said it was one of the most meaningful experiences they had at Harvard.”
The hope is that the dinner series could continue or could be scaled to a GenEd class so that more students have the opportunity to participate in this type of experience.
“This dinner series gave me a lot of faith in Harvard’s capacity to do this work,” said Evworo Diez. “The current media portrayal of Harvard is so different from what is actually happening here. There is diversity at Harvard in the perspectives that students bring to the table — from their background, faith, culture, political viewpoints, and life experiences. Students and faculty were thankful to be forced to have these hard conversations and they were ready to engage.”
“Full participation and belonging don’t happen by accident; they are cultivated through intentional acts of courage, curiosity, and compassion,” said Sherri Charleston, chief community and campus life officer. “The Building Bridges grantees have shown us that true excellence emerges not in the absence of difference, but in the embrace of it. Their work reminds us that when we create spaces for respectful dialogue, we’re not just exchanging ideas. Together, we’re building the foundation for a Harvard where everyone can thrive.”
Gaspare LoDuca named VP for information systems and technology and CIO
Gaspare LoDuca has been appointed MIT’s vice president for information systems and technology (IS&T) and chief information officer, effective Aug. 18. Currently vice president for information technology and CIO at Columbia University, LoDuca has held IT leadership roles in or related to higher education for more than two decades. He succeeds Mark Silis, who led IS&T from 2019 until 2024, when he left MIT to return to the entrepreneurial ecosystem in the San Francisco Bay area.
Executive Vice President and Treasurer Glen Shor announced the appointment today in an email to MIT faculty and staff.
“I believe that Gaspare will be an incredible asset to MIT, bringing wide-ranging experience supporting faculty, researchers, staff, and students and a highly collaborative style,” says Shor. “He is eager to start his work with our talented IS&T team to chart and implement their contributions to the future of information technology at MIT.”
LoDuca will lead the IS&T organization and oversee MIT’s information technology infrastructure and services that support its research and academic enterprise across student and administrative systems, network operations, cloud services, cybersecurity, and customer support. As co-chair of the Information Technology Governance Committee, he will guide the development of IT policy and strategy at the Institute. He will also play a key role in MIT’s effort to modernize its business processes and administrative systems, working in close collaboration with the Business and Digital Transformation Office.
“Gaspare brings to his new role extensive experience leading a complex IT organization,” says Provost Cynthia Barnhart, who served as one of Shor’s advisors during the search process. “His depth of experience, coupled with his vision for the future state of information technology and digital transformation at MIT, is compelling, and I am excited to see the positive impact he will have here.”
“As I start my new role, I plan to learn more about MIT’s culture and community to ensure that any decisions or changes we make are shaped by the community’s needs and carried out in a way that fits the culture. I’m also looking forward to learning more about the research and work being done by students and faculty to advance MIT’s mission. It’s inspiring, and I’m eager to support their success,” says LoDuca.
In his role at Columbia, LoDuca has overseen the IT department, headed IT governance committees for school and department-level IT functions, and ensured the secure operation of the university’s enterprise-class systems since 2015. During his tenure, he has crafted a culture of customer service and innovation — building a new student information system, identifying emerging technologies for use in classrooms and labs, and creating a data-sharing platform for university researchers and a grants dashboard for principal investigators. He also revamped Columbia’s technology infrastructure and implemented tools to ensure the security and reliability of its technology resources.
Before joining Columbia, LoDuca was the technology managing director for the education practice at Accenture from 1998 to 2015. In that role, he helped universities to develop and implement technology strategies and adopt modern applications and systems. His projects included overseeing the implementation of finance, human resources, and student administration systems for clients such as Columbia University, University of Miami, Carnegie Mellon University, the University System of Georgia, and Yale University.
“At a research institution, there’s a wide range of activities happening every day, and our job in IT is to support them all while also managing cybersecurity risks. We need to be creative and thoughtful in our solutions, and consider the needs and expectations of our community,” he says.
LoDuca holds a bachelor’s degree in chemical engineering from Michigan State University. He and his wife are recent empty nesters, and are in the process of relocating to Boston.
© Photo courtesy of Gaspare LoDuca.
Closing in on superconducting semiconductors
In 2023, about 4.4 percent (176 terawatt-hours) of total electricity consumption in the United States went to data centers, which are essential for processing large quantities of information. Of that 176 TWh, approximately 100 TWh (57 percent) was used by CPU and GPU equipment. Energy requirements have escalated substantially in the past decade and will only continue to grow, making the development of energy-efficient computing crucial.
Superconducting electronics have emerged as a promising alternative for classical and quantum computing, although their full exploitation for high-end computing requires a dramatic reduction in the amount of wiring linking ambient-temperature electronics and low-temperature superconducting circuits. To make systems that are both larger and more streamlined, replacing commonplace components such as semiconductors with superconducting versions could be of immense value. It’s a challenge that has captivated MIT Plasma Science and Fusion Center senior research scientist Jagadeesh Moodera and his colleagues, who described a significant breakthrough in a recent Nature Electronics paper, “Efficient superconducting diodes and rectifiers for quantum circuitry.”
Moodera was working on a stubborn problem. One of the critical long-standing requirements is the need for the efficient conversion of AC currents into DC currents on a chip while operating at the extremely cold cryogenic temperatures required for superconductors to work efficiently. For example, in superconducting “energy-efficient rapid single flux quantum” (ERSFQ) circuits, the AC-to-DC issue is limiting ERSFQ scalability and preventing their use in larger circuits with higher complexities. To respond to this need, Moodera and his team created superconducting diode (SD)-based superconducting rectifiers — devices that can convert AC to DC on the same chip. These rectifiers would allow for the efficient delivery of the DC current necessary to operate superconducting classical and quantum processors.
Quantum computer circuits can only operate at temperatures close to 0 kelvins (absolute zero), and the way power is supplied must be carefully controlled to limit the effects of interference introduced by too much heat or electromagnetic noise. Most unwanted noise and heat come from the wires connecting cold quantum chips to room-temperature electronics. Instead, using superconducting rectifiers to convert AC currents into DC within a cryogenic environment reduces the number of wires, cutting down on heat and noise and enabling larger, more stable quantum systems.
In a 2023 experiment, Moodera and his co-authors developed SDs that are made of very thin layers of superconducting material that display nonreciprocal (or unidirectional) flow of current and could be the superconducting counterpart to standard semiconductors. Even though SDs have garnered significant attention, especially since 2020, up until this point the research has focused only on individual SDs for proof of concept. The group’s 2023 paper outlined how they created and refined a method by which SDs could be scaled for broader application.
Now, by building a diode bridge circuit, they demonstrated the successful integration of four SDs and realized AC-to-DC rectification at cryogenic temperatures.
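For intuition about what a diode bridge does, the toy Python simulation below models four ideal one-way elements arranged so that either half of the AC cycle drives the load in the same direction, followed by simple RC smoothing. It is a sketch of the classical rectifier topology only, with assumed parameters, not a model of the paper’s superconducting devices:

```python
import numpy as np

t = np.linspace(0, 2e-3, 2000)        # 2 ms of signal
v_ac = np.sin(2 * np.pi * 1e3 * t)    # 1 kHz AC input (arbitrary units)

# In an ideal four-diode bridge, one diagonal pair conducts on positive
# half-cycles and the other pair on negative ones, so the load always
# sees the same polarity: v_load = |v_ac|.
v_load = np.abs(v_ac)

# Crude RC smoothing to approximate a usable DC rail (assumed tau).
dt = t[1] - t[0]
tau = 0.5e-3
v_dc = np.zeros_like(v_load)
for i in range(1, len(t)):
    # Capacitor charges instantly through the bridge, decays through R.
    v_dc[i] = max(v_load[i], v_dc[i - 1] * (1 - dt / tau))

print(f"mean smoothed output: {v_dc.mean():.2f} (peak input 1.00)")
```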
The new approach described in their recent Nature Electronics paper will significantly cut down on the thermal and electromagnetic noise traveling from ambient temperature into cryogenic circuitry, enabling cleaner operation. The SDs could also potentially serve as isolators or circulators, helping to shield qubit signals from external interference. The successful combination of multiple SDs into the first integrated SD circuit represents a key step toward making superconducting computing a commercial reality.
“Our work opens the door to the arrival of highly energy-efficient, practical superconductivity-based supercomputers in the next few years,” says Moodera. “Moreover, we expect our research to enhance the qubit stability while boosting the quantum computing program, bringing its realization closer.” Given the multiple beneficial roles these components could play, Moodera and his team are already working toward the integration of such devices into actual superconducting logic circuits, including in dark matter detection circuits that are essential to the operation of experiments at CERN and LUX-ZEPLIN at the Berkeley National Lab.
This work was partially funded by MIT Lincoln Laboratory’s Advanced Concepts Committee, the U.S. National Science Foundation, U.S. Army Research Office, and U.S. Air Force Office of Scientific Research.
This work was carried out, in part, through the use of MIT.nano’s facilities.
© Photo: AdobeStock
Cambridge researchers awarded Advanced Grants from the European Research Council

The successful Cambridge grantees’ work covers a range of research areas, including the development of next-generation semiconductors, new methods to identify dyslexia in young children, how diseases spread between humans and animals, and the early changes that happen in cells before breast cancer develops, with the goal of finding ways to stop the disease before it starts.
The funding, worth €721 million in total, will go to 281 leading researchers across Europe. The Advanced Grant competition is one of the most prestigious and competitive funding schemes in the EU and associated countries, including the UK. It gives senior researchers the opportunity to pursue ambitious, curiosity-driven projects that could lead to major scientific breakthroughs. Advanced Grants may be awarded up to €2.5 million for a period of five years. The grants are part of the EU’s Horizon Europe programme. The UK agreed a deal to associate to Horizon Europe in September 2023.
This competition attracted 2,534 proposals, which were reviewed by panels of internationally renowned researchers. Over 11% of proposals were selected for funding. Estimates show that the grants will create approximately 2,700 jobs in the teams of new grantees. The new grantees will be based at universities and research centres in 23 EU Member States and associated countries, notably in the UK (56 grants), Germany (35), Italy (25), the Netherlands (24), and France (23).
“Many congratulations to our Cambridge colleagues on these prestigious ERC funding awards,” said Professor Sir John Aston, Cambridge’s Pro-Vice-Chancellor for Research. “This type of long-term funding is invaluable, allowing senior researchers the time and space to develop potential solutions for some of the biggest challenges we face. We are so fortunate at Cambridge to have so many world-leading researchers across a range of disciplines, and I look forward to seeing the outcomes of their work.”
The Cambridge recipients of 2025 Advanced Grants are:
Professor Clare Bryant (Department of Veterinary Medicine) for investigating human and avian pattern recognition receptor activation of cell death pathways, and the impact on the host inflammatory response to zoonotic infections.
Professor Sir Richard Friend (Cavendish Laboratory/St John’s College) for bright high-spin molecular semiconductors.
Professor Usha Goswami (Department of Psychology/St John’s College) for a cross-language approach to the early identification of dyslexia and developmental language disorder using speech production measures with children.
Professor Regina Grafe (Faculty of History) for colonial credit and financial diversity in the Global South: Spanish America 1600-1820.
Professor Judy Hirst (MRC Mitochondrial Biology Unit/Corpus Christi College) for the energy-converting mechanism of a modular biomachine: Uniting structure and function to establish the engineering principles of respiratory complex I.
Professor Matthew Juniper (Department of Engineering/Trinity College) for adjoint-accelerated inference and optimisation methods.
Professor Walid Khaled (Department of Pharmacology/Magdalene College) for understanding precancerous changes in breast cancer for the development of therapeutic interceptions.
Professor Adrian Liston (Department of Pathology/St Catharine’s College) for dissecting the code for regulatory T cell entry into the tissues and differentiation into tissue-resident cells.
Professor Róisín Owens (Department of Chemical Engineering and Biotechnology/Newnham College) for conformal organic devices for electronic brain-gut readout and characterisation.
Professor Emma Rawlins (Department of Physiology, Development and Neuroscience/Gurdon Institute) for reprogramming lung epithelial cell lineages for regeneration.
Dr Marta Zlatic (Department of Zoology/Trinity College) for discovering the circuit and molecular basis of inter-strain and inter-species differences in learning.
“These ERC grants are our commitment to making Europe the world’s hub for excellent research,” said Ekaterina Zaharieva, European Commissioner for Startups, Research, and Innovation. “By supporting projects that have the potential to redefine whole fields, we are not just investing in science but in the future prosperity and resilience of our continent. In the next competition rounds, scientists moving to Europe will receive even greater support in setting up their labs and research teams here. This is part of our ‘Choose Europe for Science’ initiative, designed to attract and retain the world’s top scientists.”
“Much of this pioneering research will contribute to solving some of the most pressing challenges we face: social, economic and environmental,” said Professor Maria Leptin, President of the European Research Council. “Yet again, many scientists, around 260, with ground-breaking ideas were rated as excellent, but remained unfunded due to a lack of funds at the ERC. We hope that more funding will be available in the future to support even more creative researchers in pursuing their scientific curiosity.”
Eleven senior researchers at the University of Cambridge have been awarded Advanced Grants from the European Research Council – the highest number of grants awarded to any institution in this latest funding round.
The text in this work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Images, including our videos, are Copyright ©University of Cambridge and licensors/contributors as identified. All rights reserved. We make our image and video content available in a number of ways – on our main website under its Terms and conditions, and on a range of channels including social media that permit your use and sharing of our content under their respective Terms.
Evolution made us cheats, now free-riders run the world and we need to change, new book warns

In Invisible Rivals, published by Yale University Press on 17 June, Dr Goodman argues that throughout human history we have tried to rid our social groups of free-riders, people who take from others without giving anything back. But instead of eliminating free-riders, human evolution has just made them better at hiding their deception.
Goodman explains that humans have evolved to use language to disguise selfish acts and exploit our cooperative systems. He links this ‘invisible rivalry’ to the collapse of trust and consequent success of political strongmen today.
Goodman says: “We see this happening today, as evidenced by the rise of the Julius Caesar of our time—Donald Trump— but it is a situation that evolution has predicted since the origins of life and later, language, and which will only change form again even if the current crises are overcome.”
Goodman argues that over the course of human evolution “When we rid ourselves of ancient, dominant alphas, we traded overt selfishness for something perhaps even darker: the ability to move through society while planning and coordinating.”
“As much as we evolved to use language effectively to work together, to overthrow those brutish and nasty dominants that pervaded ancient society, we also (and do) use language to create opportunities that benefit us … We use language to keep our plans invisible. Humans, more than other known organisms, can cooperate until we imagine a way to compete, exploit, or coerce, and almost always rely on language to do so.”
Goodman, an expert on human social evolution at the University of Cambridge, identifies free-riding behaviour in everything from benefits cheating and tax evasion, to countries dodging action on climate change, and the actions of business leaders and politicians.
Goodman warns that “We can’t stop people free-riding, it’s part of our nature, the incurable syndrome… Free riders are among us at every level of society and pretending otherwise can make our own goals unrealistic, and worse, appear hopeless. But if we accept that we all have this ancient flaw, this ability to deceive ourselves and others, we can design policies around that and change our societies for the better.”
Lessons from our ancestors
Goodman points out that humans evolved in small groups, meaning that over many generations we managed to design social norms to govern the distribution of food, water and other vital resources.
“People vied for power but these social norms helped to maintain a trend toward equality, balancing out our more selfish dispositions. Nevertheless, the free-rider problem persisted and using language we got better at hiding our cheating.”
One academic camp has argued that ancient humans used language to work together to overthrow and eject “brutish dominants”. The opposing view claims that this never happened and that humans are inherently selfish and tribal. Goodman rejects both extremes.
“If we accept the view that humans are fundamentally cooperative, we risk trusting blindly. If we believe everyone is selfish, we won’t trust anyone. We need to be realistic about human nature. We’re a bit of both so we need to learn how to place our trust discerningly.”
Goodman points out that our distant ancestors benefitted from risk-pooling systems, whereby all group members contributed labour and shared resources, but this only worked because it is difficult to hide tangible assets such as tools and food. While some hunter-gatherer societies continue to rely on these systems, they are ineffective in most modern societies in our globalized economy.
“Today most of us rely largely on intangible assets for monetary exchange so people can easily hide resources, misrepresent their means and invalidate the effectiveness of social norms around risk pooling,” Goodman says.
“We are flawed animals capable of deception, cruelty, and selfishness. The truth is hard to live with but confronting it through honest reflection about our evolutionary past gives us the tools to teach ourselves and others about how we can improve the future.”
Taking action: self-knowledge, education and policy
Goodman, who teaches students at Cambridge about the evolution of cooperation, argues that we reward liars from a young age and that this reinforces bad behaviour into adulthood.
“People tell children that cheaters don’t prosper, but in fact cheats who don’t get caught can do very well for themselves.”
“Evolutionarily speaking, appearing trustworthy but being selfish can be more beneficial to the individual. We need to recognise that and make a moral choice about whether we try to use people or to work with them.”
At the same time, Goodman thinks we need to arm ourselves intellectually with the power to tell who is credible and who is not. “Our most important tool for doing this is education,” he says. “We must teach people to think ethically for themselves, and to give them the tools to do so.”
But Goodman cautions that even the tools we use to expose exploiters are open to exploitation: “Think about how people across the political sphere accuse others of virtue signalling or abusing a well-intentioned political movement for their own gain.”
Goodman believes that exposing free-riders is more beneficial than punishment. “Loss of social capital through reputation is an important motivator for anyone,” he argues, suggesting that journalistic work exposing exploitation can be as effective at driving behaviour change as criminal punishment.
“The dilemma each of us faces now is whether to confront invisible rivalry or to let exploiters undermine society until democracy in the free world unravels—and the freedom of dissent is gone.”
Dr Jonathan R Goodman is a research associate at Cambridge Public Health and a social scientist at the Wellcome Sanger Institute.
Invisible Rivals: How We Evolved to Compete in a Cooperative World was published by Yale University Press on 17 June 2025 (ISBN: 9780300274356)
To save democracy and solve the world's biggest challenges, we need to get better at spotting and exposing people who exploit human cooperation for personal gain, argues Cambridge social scientist Dr Jonathan Goodman.
The text in this work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Images, including our videos, are Copyright ©University of Cambridge and licensors/contributors as identified. All rights reserved. We make our image and video content available in a number of ways – on our main website under its Terms and conditions, and on a range of channels including social media that permit your use and sharing of our content under their respective Terms.
A brief history of the global economy, through the lens of a single barge
In 1989, New York City opened a new jail. But not on dry land. The city leased a barge, then called the “Bibby Resolution,” which had been topped with five stories of containers made into housing, and anchored it in the East River. For five years, the vessel lodged inmates.
A floating detention center is a curiosity. But then, the entire history of this barge is curious. Built in Sweden in 1979, it housed British troops during the Falkland Islands war with Argentina, became worker housing for Volkswagen employees in West Germany, got sent to New York, later served as a detention center off the coast of England, and finally was deployed as oil worker housing off the coast of Nigeria. The barge has had nine names and several owners, and has flown the flags of five countries.
In this one vessel, then, we can see many currents: globalization, the transience of economic activity, and the hazy world of transactions many analysts and observers call “the offshore,” the lightly regulated sphere of economic activity that encourages short-term actions.
“The offshore presents a quick and potentially cheap solution to a crisis,” says MIT lecturer Ian Kumekawa. “It is not a durable solution. The story of the barge is the story of it being used as a quick fix in all sorts of crises. Then these expediences become the norm, and people get used to them and have an expectation that this is the way the world works.”
Now Kumekawa, a historian who started teaching as a lecturer at MIT earlier this year, explores the ship’s entire history in “Empty Vessel: The Global Economy in One Barge,” just published by Knopf and John Murray. In it, he traces the barge’s trajectory and the many economic and geopolitical changes that helped create the ship’s distinctive deployments around the world.
“The book is about a barge, but it’s also about the developing, emerging offshore world, where you see these layers of globalization, financialization, privatization, and the dissolution of territoriality and orders,” Kumekawa says. “The barge is a vehicle through which I can tell the story of those layers together.”
“Never meant to be permanent”
Kumekawa first found out about the vessel several years ago: New York City obtained another floating detention center in the 1990s, which prompted him to start looking into the past of the older jail ship, the former “Bibby Resolution.” The more he found out about its distinctive past, the more curious he became.
“You start pulling on a thread, and you realize you can keep pulling,” Kumekawa says.
The barge Kumekawa follows in the book was built in Sweden in 1979 as the “Balder Scapa.” Even then, commerce was plenty globalized: The vessel was commissioned by a Norwegian shell company, with negotiations run by an expatriate Swedish shipping agent whose firm was registered in Panama and used a Miami bank.
The barge was built at an inflection point following the economic slowdown and oil shocks of the 1970s. Manufacturing was on the verge of declining in both Western Europe and the U.S.; about half as many people now work in manufacturing in those regions, compared to 1960. Companies were looking to find cheaper global locations for production, reinforcing the sense that economic activity was now less durable in any given place.
The barge became part of this transience. The five-story accommodation block was added in the early 1980s; in 1983 it was re-registered in the UK and sent to the Falkland Islands as a troop accommodation named the “COASTEL 3.” Then it was re-registered in the Bahamas and sent to Emden, West Germany, as housing for Volkswagen workers. The vessel then served its stints as inmate housing — first in New York, then off the coast of England from 1997 to 2005. By 2010, it had been re-re-re-registered, in St. Vincent and Grenadines, and was housing oil workers off the coast of Nigeria.
“Globalization is more about flow than about stocks, and the barge is a great example of that,” Kumekawa says. “It’s always on the move, and never meant to be a permanent container. It’s understood people are going to be passing through.”
As Kumekawa explores in the book, this sense of social dislocation overlapped with the shrinking of state capacity, as many states increasingly encouraged companies to pursue globalized production and lightly regulated financial activities in numerous jurisdictions, in the hope it would enhance growth. And it has, albeit with unresolved questions about who the benefits accrue to, the social dislocation of workers, and more.
“In a certain sense it’s not an erosion of state power at all,” Kumekawa says. “These states are making very active choices to use offshore tools, to circumvent certain roadblocks.” He adds: “What happens in the 1970s and certainly in the 1980s is that the offshore comes into its own as an entity, and didn’t exist in the same way even in the 1950s and 1960s. There’s a money interest in that, and there’s a political interest as well.”
Abstract forces, real materials and people
Kumekawa is a scholar with a strong interest in economic history; his previous book, “The First Serious Optimist: A.C. Pigou and the Birth of Welfare Economics,” was published in 2017. This coming fall, Kumekawa will be team-teaching a class on the relationship between economics and history, along with MIT economists Abhijit Banerjee and Jacob Moscona.
Working on “Empty Vessel” also necessitated that Kumekawa use a variety of research techniques, from archival work to journalistic interviews with people who knew the vessel well.
“I had a wonderful set of conversations with the man who was the last bargemaster,” Kumekawa says. “He was the person in effect steering the vessel for many years. He was so aware of all of the forces at play — the market for oil, the prices of accommodations, the regulations, the fact no one had reinforced the frame.”
“Empty Vessel” has already received critical acclaim. Reviewing it in The New York Times, Jennifer Szalai writes that this “elegant and enlightening book is an impressive feat.”
For his part, Kumekawa also took inspiration from a variety of writings about ships, voyages, commerce, and exploration, recognizing that these vessels contain stories and vignettes that illuminate the wider world.
“Ships work very well as devices connecting the global and the local,” he says. Using the barge as the organizing principle of his book, Kumekawa adds, “makes a whole bunch of abstract processes very concrete. The offshore itself is an abstraction, but it’s also entirely dependent on physical infrastructure and physical places. My hope for the book is it reinforces the material dimension of these abstract global forces.”
© Image: Courtesy of Penguin Random House, and Ian Kumekawa
Students and staff work together for MIT’s first “No Mow May”
In recent years, some grass lawns around the country have grown a little taller in springtime thanks to No Mow May, a movement launched by U.K. nonprofit Plantlife in 2019 to raise awareness about the ecological impacts of the traditional, resource-intensive, manicured grass lawn. No Mow May encourages people to skip spring mowing to allow grass to grow tall and provide food and shelter for beneficial creatures including bees, beetles, and other pollinators.
This year, MIT took part in the practice for the first time, with portions of the Kendall/MIT Open Space, Bexley Garden, and the Tang Courtyard forgoing mowing from May 1 through June 6 to make space for local pollinators, decrease water use, and encourage new thinking about the traditional lawn. MIT’s first No Mow May was championed by the Graduate Student Council Sustainability Subcommittee (GSC Sustain) and made possible by the Office of the Vice Provost for Campus Space Management and Planning.
A student idea sprouts
Despite being a dense urban campus, MIT has no shortage of green spaces — from pocket gardens and community-managed vegetable plots to thousands of shade trees — and interest in these spaces continues to grow. In recent years, student-led initiatives supported by Institute leadership and operational staff have transformed portions of campus by increasing the number of native pollinator plants and expanding community gardens, like the Hive Garden. With No Mow May, these efforts stepped out of the garden and into MIT’s many grassy open spaces.
“The idea behind it was to raise awareness for more sustainable and earth-friendly lawn practices,” explains Gianmarco Terrones, GSC Sustain member. Those practices include reducing the burden of mowing, limiting use of fertilizers, and providing shelter and food for pollinators. “The insects that live in these spaces are incredibly important in terms of pollination, but they’re also part of the food chain for a lot of animals,” says Terrones.
Research has shown that holding off on mowing in spring, even in small swaths of green space, can have an impact. The early months of spring have the lowest number of flowers in regions like New England, and providing a resource and refuge — even for a short duration — can support fragile pollinators like bees. Additionally, No Mow May aims to help people rethink their yards and practices, which are not always beneficial for local ecosystems.
Signage at each No Mow site on campus highlighted information on local pollinators, the impact of the project, and questions for visitors to ask themselves. “Having an active sign there to tell people, ‘look around. How many butterflies do you see after six weeks of not mowing? Do you see more? Do you see more bees?’ can cause subtle shifts in people’s awareness of ecosystems,” says GSC Sustain member Mingrou Xie. A mowed barrier around each project also helped visitors know that areas of tall grass at No Mow sites are intentional.
Campus partners bring sustainable practices to life
To make MIT’s No Mow May possible, GSC Sustain members worked with the Office of the Vice Provost and the Open Space Working Group, co-chaired by Vice Provost for Campus Space Management and Planning Brent Ryan and Director of Sustainability Julie Newman. The Working Group, which also includes staff from Open Space Programming, Campus Planning, and faculty in the School of Architecture and Planning, helped to identify potential No Mow locations and develop strategies for educational signage and any needed maintenance. “Massachusetts is a biodiverse state, and No Mow May provides an exciting opportunity for MIT to support that biodiversity on its own campus,” says Ryan.
Students were eager for space on campus with high visibility, and the chosen locations of the Kendall/MIT Open Space, Bexley Garden, and the Tang Courtyard fit the bill. “We wanted to set an example and empower the community to feel like they can make a positive change to an environment they spend so much time in,” says Xie.
For GSC Sustain, that positive change also takes the form of the Native Plant Project, which they launched in 2022 to increase the number of Massachusetts-native pollinator plants on campus — plants like swamp milkweed, zigzag goldenrod, big leaf aster, and red columbine, with which native pollinators have co-evolved. Partnering with the Open Space Working Group, GSC Sustain is currently focused on two locations for new native plant gardens — the President’s Garden and the terrace gardens at the E37 Graduate Residence. “Our short-term goal is to increase the number of native [plants] on campus, but long term we want to foster a community of students and staff interested in supporting sustainable urban gardening,” says Xie.
Campus as a test bed continues to grow
After just a few weeks of growing, the campus No Mow May locations sprouted buttercups, mouse ear chickweed, and small tree saplings, highlighting the diversity lying dormant in the average lawn. Terrones also notes other discoveries: “It’s been exciting to see how much the grass has sprung up these last few weeks. I thought the grass would all grow at the same rate, but as May has gone on the variations in grass height have become more apparent, leading to non-uniform lawns with a clearly unmanicured feel,” he says. “We hope that members of MIT noticed how these lawns have evolved over the span of a few weeks and are inspired to implement more earth-friendly lawn practices in their own homes and spaces.”
No Mow May and the Native Plant Project fit into MIT’s overall focus on creating resilient ecosystems that support and protect the MIT community and the beneficial critters that call it home. MIT Grounds Services has long included native plants in the mix of what is grown on campus, and native pollinator gardens, like the Hive Garden, have been developed and cared for in recent years through partnerships between students and Grounds Services. Grounds, along with the consultants that design and install campus landscape projects, strives to select plants that help meet sustainability goals, such as managing stormwater runoff and cooling. No Mow May can provide one more data point for the iterative process of choosing the best plants and practices for a unique microclimate like the MIT campus.
“We are always looking for new ways to use our campus as a test bed for sustainability,” says Director of Sustainability Julie Newman. “Community-led projects like No Mow May help us to learn more about our campus and share those lessons with the larger community.”
The Office of the Vice Provost, the Open Space Working Group, and GSC Sustain plan to reconnect in the fall for a formal debrief of the project and its success. Given the positive community feedback, they will discuss possibilities for expanding or extending No Mow May.
© Photo: Gianmarco Terrones
Professor Emeritus Hank Smith honored for pioneering work in nanofabrication
Nanostructures are a stunning array of intricate patterns that are imperceptible to the human eye, yet they help power modern life. They are the building blocks of microchip transistors, etched onto grating substrates of space-based X-ray telescopes, and drive innovations in medicine, sustainability, and quantum computing.
Since the 1970s, Henry “Hank” Smith, MIT professor emeritus of electrical engineering, has been a leading force in this field. He pioneered the use of proximity X-ray lithography, proving that the short wavelength of X-rays could produce high-resolution patterns at the nanometer scale. Smith also made significant advancements in phase-shifting masks (PSMs), a technique that disrupts light waves to enhance contrast. His design of attenuated PSMs, which he co-created with graduate students Mark Schattenburg PhD ʼ84 and Erik H. Anderson ʼ81, SM ʼ84, PhD ʼ88, is still used today in the semiconductor industry.
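As a rough, textbook-level illustration of the phase-shifting principle (a general lithography relationship, not a description of Smith's attenuated-PSM design specifically): a transparent shifter layer of refractive index n and thickness t delays light passing through it by Δφ = 2π(n − 1)t/λ, so the 180-degree shift that makes adjacent bright features interfere destructively at their shared edge requires

    t = λ / (2(n − 1)),

where λ is the exposure wavelength.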
In recognition of these contributions, as well as highly influential achievements in liquid-immersion lithography, achromatic-interference lithography, and zone-plate array lithography, Smith recently received the 2025 SPIE Frits Zernike Award for Microlithography. Given by the Society of Photo-Optical Instrumentation Engineers (SPIE), the accolade recognizes scientists for their outstanding accomplishments in microlithographic technology.
“The Zernike Award is an impressive honor that aptly recognizes Hank’s pioneering contributions,” says Karl Berggren, MIT’s Joseph F. and Nancy P. Keithley Professor in Electrical Engineering and faculty head of electrical engineering. “Whether it was in the classroom, at a research conference, or in the lab, Hank approached his work with a high level of scientific rigor that helped make him decades ahead of industry practices.”
Now 88 years old, Smith has garnered many other honors. He was also awarded the SPIE BACUS Prize, named a member of the National Academy of Engineering, and is a fellow of the American Academy of Arts and Sciences, IEEE, the National Academy of Inventors, and the International Society for Nanomanufacturing.
Jump-starting the nano frontier
From an early age, Smith was fascinated by the world around him. He took apart clocks to see how they worked, explored the outdoors, and even observed the movement of water. After graduating from high school in New Jersey, Smith majored in physics at the College of the Holy Cross. From there, he pursued his doctorate at Boston College and served three years as an officer in the U.S. Air Force.
It was his job at MIT Lincoln Laboratory that ultimately changed Smith’s career trajectory. There, he met visitors from MIT and Harvard University who shared their big ideas for electronic and surface acoustic wave devices but were stymied by the physical limitations of fabrication. Yet, few were inclined to tackle this challenge.
“The job of making things was usually brushed off the table with, ‘oh well, we’ll get some technicians to do that,’” Smith said in his oral history for the Center for Nanotechnology in Society. “And the intellectual content of fabrication technology was not appreciated by people who had been ‘traditionally educated,’ I guess.”
More interested in solving problems than maintaining academic rank, Smith set out to understand the science of fabrication. His breakthrough in X-ray lithography signaled to the world the potential and possibilities of working on the nanometer scale, says Schattenburg, who is a senior research scientist at MIT Kavli Institute for Astrophysics and Space Research.
“His early work proved to people at MIT and researchers across the country that nanofabrication had some merit,” Schattenburg says. “By showing what was possible, Hank really jump-started the nano frontier.”
Cracking open lithography’s black box
By 1980, Smith left Lincoln Lab for MIT’s main campus and continued to push forward new ideas in his NanoStructures Laboratory (NSL), formerly the Submicron Structures Laboratory. NSL served as both a research lab and a service shop that provided optical gratings, which are pieces of glass engraved with sub-micron periodic patterns, to the MIT community and outside scientists. It was a busy time for the lab; NSL attracted graduate students and international visitors. Still, Smith and his staff ensured that anyone visiting NSL would also receive a primer on nanotechnology.
“Hank never wanted anything we produced to be treated as a black box,” says Mark Mondol, MIT.nano e-beam lithography domain expert who spent 23 years working with Smith in NSL. “Hank was always very keen on people understanding our work and how it happens, and he was the perfect person to explain it because he talked in very clear and basic terms.”
The physical NSL space in MIT Building 39 shuttered in 2023, a decade after Smith became an emeritus faculty member. NSL’s knowledgeable staff and unique capabilities transferred to MIT.nano, which now serves as MIT’s central hub for supporting nanoscience and nanotechnology advancements. Unstoppable, Smith continues to contribute his wisdom to the ever-expanding nano community by giving talks at the NSL Community Meetings at MIT.nano focused on lithography, nanofabrication, and their future.
Smith’s career is far from complete. Through his startup LumArray, Smith continues to push the boundaries of knowledge. He recently devised a maskless lithography method, known as X-ray Maskless Lithography (XML), that has the potential to lower manufacturing costs of microchips and thwart the sale of counterfeit microchips.
Dimitri Antoniadis, MIT professor emeritus of electrical engineering and computer science, is Smith’s longtime collaborator and friend. According to him, Smith’s commitment to research is practically unheard-of.
“Once professors reach emeritus status, we usually inspire and supervise research,” Antoniadis says. “It’s very rare for retired professors to do all the work themselves, but he loves it.”
Enduring influence
Smith’s legacy extends far beyond the groundbreaking tools and techniques he pioneered, say his friends, colleagues, and former students. His relentless curiosity and commitment to his graduate students helped propel his field forward.
He earned a reputation for sitting in the front row at research conferences, ready to ask the first question. Fellow researchers sometimes dreaded seeing him there.
“Hank kept us honest,” Berggren says. “Scientists and engineers knew that they couldn’t make a claim that was a little too strong, or use data that didn’t support the hypothesis, because Hank would hold them accountable.”
Smith never saw himself as playing the good cop or bad cop — he was simply a curious learner unafraid to look foolish.
“There are famous people, Nobel Prize winners, that will sit through research presentations and not have a clue as to what’s going on,” Smith says. “That is an utter waste of time. If I don’t understand something, I’m going to ask a question.”
As an advisor, Smith held his graduate students to high standards. If they came unprepared or lacked understanding of their research, he would challenge them with tough, unrelenting questions. Yet, he was also their biggest advocate, helping students such as Lisa Su SB/SM ʼ91, PhD ʼ94, who is now the chair and chief executive officer of AMD, and Dario Gil PhD ʼ03, who is now the chair of the National Science Board and senior vice president and director of research at IBM, succeed in the lab and beyond.
Research Specialist James Daley has spent nearly three decades at MIT, most of them working with Smith. In that time, he has seen hundreds of advisees graduate and return to offer their thanks. “Hank’s former students are all over the world,” Daley says. “Many are now professors mentoring their own graduate students and bringing with them some of Hank’s style. They are his greatest legacy.”
© Photo: Eric Levin
Celebrating an academic-industry collaboration to advance vehicle technology
On May 6, MIT AgeLab’s Advanced Vehicle Technology (AVT) Consortium, part of the MIT Center for Transportation and Logistics, celebrated 10 years of its global academic-industry collaboration. AVT was founded to develop new data that deepen automotive manufacturers’, suppliers’, and insurers’ real-world understanding of how drivers use and respond to increasingly sophisticated vehicle technologies, such as assistive and automated driving, and to accelerate the applied insight needed to advance design and development. The celebration event brought together stakeholders from across the industry for a set of keynote addresses and panel discussions on critical topics significant to the industry and its future, including artificial intelligence, automotive technology, collision repair, consumer behavior, sustainability, vehicle safety policy, and global competitiveness.
Bryan Reimer, founder and co-director of the AVT Consortium, opened the event by remarking that over the decade AVT has collected hundreds of terabytes of data, presented and discussed research with its over 25 member organizations, supported members’ strategic and policy initiatives, published select outcomes, and built AVT into a global influencer with tremendous impact in the automotive industry. He noted that current opportunities and challenges for the industry include distracted driving, a lack of consumer trust and concerns around transparency in assistive and automated driving features, and high consumer expectations for vehicle technology, safety, and affordability. How will industry respond? Major players in attendance weighed in.
In a powerful exchange on vehicle safety regulation, John Bozzella, president and CEO of the Alliance for Automotive Innovation, and Mark Rosekind, former chief safety innovation officer of Zoox, former administrator of the National Highway Traffic Safety Administration, and former member of the National Transportation Safety Board, challenged industry and government to adopt a more strategic, data-driven, and collaborative approach to safety. They asserted that regulation must evolve alongside innovation, not lag behind it by decades. Appealing to the automakers in attendance, Bozzella cited the success of voluntary commitments on automatic emergency braking as a model for future progress. “That’s a way to do something important and impactful ahead of regulation.” They advocated for shared data platforms, anonymous reporting, and a common regulatory vision that sets safety baselines while allowing room for experimentation. The 40,000 annual road fatalities demand urgency — what’s needed is a move away from tactical fixes and toward a systemic safety strategy. “Safety delayed is safety denied,” Rosekind stated. “Tell me how you’re going to improve safety. Let’s be explicit.”
Drawing inspiration from aviation’s exemplary safety record, Kathy Abbott, chief scientific and technical advisor for the Federal Aviation Administration, pointed to a culture of rigorous regulation, continuous improvement, and cross-sectoral data sharing. Aviation’s model, built on highly trained personnel and strict predictability standards, contrasts sharply with the fragmented approach in the automotive industry. The keynote emphasized that a foundation of safety culture — one that recognizes that technological ability alone isn’t justification for deployment — must guide the auto industry forward. Just as aviation doesn’t equate absence of failure with success, vehicle safety must be measured holistically and proactively.
With assistive and automated driving top of mind in the industry, Pete Bigelow of Automotive News offered a pragmatic diagnosis. With companies like Ford and Volkswagen stepping back from full autonomy projects like Argo AI, the industry is now focused on Level 2 and 3 technologies, which refer to assisted and automated driving, respectively. Tesla, GM, and Mercedes are experimenting with subscription models for driver assistance systems, yet consumer confusion remains high. J.D. Power reports that many drivers do not grasp the differences between L2 and L2+, or whether these technologies offer safety or convenience features. Safety benefits have yet to manifest in reduced traffic deaths, which have risen by 20 percent since 2020. The recurring challenge: L3 systems demand that human drivers take over during technical difficulties, even though allowing drivers to disengage is their primary benefit, potentially worsening outcomes. Bigelow cited a quote from Bryan Reimer as one of the best he’s received in his career: “Level 3 systems are an engineer’s dream and a plaintiff attorney’s next yacht,” highlighting the legal and design complexity of systems that demand handoffs between machine and human.
In terms of the impact of AI on the automotive industry, Mauricio Muñoz, senior research engineer at AI Sweden, underscored that despite AI’s transformative potential, the automotive industry cannot rely on general AI megatrends to solve domain-specific challenges. While landmark achievements like AlphaFold demonstrate AI’s prowess, automotive applications require domain expertise, data sovereignty, and targeted collaboration. Energy constraints, data firewalls, and the high costs of AI infrastructure all pose limitations, making it critical that companies fund purpose-driven research that can reduce costs and improve implementation fidelity. Muñoz warned that while excitement abounds — with some predicting artificial superintelligence by 2028 — real progress demands organizational alignment and a deep understanding of the automotive context, not just computational power.
Turning the focus to consumers, a collision repair panel featuring Richard Billyeald from Thatcham Research, Hami Ebrahimi from Caliber Collision, and Mike Nelson from Nelson Law explored the unintended consequences of vehicle technology advances: spiraling repair costs, labor shortages, and a lack of repairability standards. Panelists warned that even minor repairs for advanced vehicles now require costly and complex sensor recalibrations — compounded by inconsistent manufacturer guidance and no clear consumer alerts when systems are out of calibration. The panel called for greater standardization, consumer education, and repair-friendly design. As insurance premiums climb and more people forgo insurance claims, the lack of coordination between automakers, regulators, and service providers threatens consumer safety and undermines trust. The group warned that until Level 2 systems function reliably and affordably, moving toward Level 3 autonomy is premature and risky.
While the repair panel emphasized today’s urgent challenges, other speakers looked to the future. Honda’s Ryan Harty, for example, highlighted the company’s aggressive push toward sustainability and safety. Honda aims for zero environmental impact and zero traffic fatalities, with plans to be 100 percent electric by 2040 and to lead in energy storage and clean power integration. The company has developed tools to coach young drivers and is investing in charging infrastructure, grid-aware battery usage, and green hydrogen storage. “What consumers buy in the market dictates what the manufacturers make,” Harty noted, underscoring the importance of aligning product strategy with user demand and environmental responsibility. He stressed that manufacturers can only decarbonize as fast as the industry allows, and emphasized the need to shift from cost-based to life-cycle-based product strategies.
Finally, a panel involving Laura Chace of ITS America, Jon Demerly of Qualcomm, Brad Stertz of Audi/VW Group, and Anant Thaker of Aptiv covered the near-, mid-, and long-term future of vehicle technology. Panelists emphasized that consumer expectations, infrastructure investment, and regulatory modernization must evolve together. Despite record bicycle fatality rates and persistent distracted driving, features like school bus detection and stop sign alerts remain underutilized due to skepticism and cost. Panelists stressed that we must design systems for proactive safety rather than reactive response. The slow integration of digital infrastructure — sensors, edge computing, data analytics — stems not only from technical hurdles, but procurement and policy challenges as well.
Reimer concluded the event by urging industry leaders to re-center the consumer in all conversations — from affordability to maintenance and repair. With the rising costs of ownership, growing gaps in trust in technology, and misalignment between innovation and consumer value, the future of mobility depends on rebuilding trust and reshaping industry economics. He called for global collaboration, greater standardization, and transparent innovation that consumers can understand and afford. He highlighted that global competitiveness and public safety both hang in the balance. As Reimer noted, “success will come through partnerships” — between industry, academia, and government — that work toward shared investment, cultural change, and a collective willingness to prioritize the public good.
© Photo: Kelly Davidson Studio
Brainwashing? Like ‘The Manchurian Candidate’?

Rebecca Lemov.
Veasey Conway/Harvard Staff Photographer
More than vestige of Cold War, mind-control techniques remain with us in social media, cults, AI, elsewhere, new book argues
Liz Mineo
Harvard Staff Writer
Brainwashing is often viewed as a Cold War relic — think ’60s films like “The Manchurian Candidate” and “The IPCRESS File.”
But Rebecca Lemov, professor of the history of science, argues in her recently released book “The Instability of Truth: Brainwashing, Mind Control, and Hyper-Persuasion” that it persists. Elements of coercion and persuasion, the components of mind and behavior control, are used in cults, social media, AI, and even crypto culture, she said.
In this edited interview, Lemov talks about the history of brainwashing, why it endures, and how it works.
What is the common thread among brainwashing, mind control, and hyper-persuasion?

They’re all related. Brainwashing gets the most attention because it is the most dramatic and grabs headlines.
The concept attracted me 20 years ago when I set out to do my dissertation research. Because I had studied behavioral engineering, brainwashing seemed to me like the most extreme form of engineering someone to do something or think something different than what they might otherwise do.
Mind control is a synonym, but it has more of an emphasis on technology. I invented the word hyper-persuasion to describe a highly targeted set of techniques that can exist in our modern media environment. The common thread among them is one of coercion combined with persuasion.
You write that Korean War POWs in the early 1950s brought the concept of brainwashing home to the U.S. Did brainwashing exist before that?
Before the Korean War, there were incidents that we could certainly call brainwashing, going back to the ancient Greeks and certain cultic mysteries and transformations that were enacted in circumstances of coercion mixed with persuasion.
You could jump forward to the 1930s, to the “show trials” in Moscow where political enemies would be confessing to terrible crimes, or to the 1940s, when Cardinal Mindszenty, a Hungarian war hero, after having been arrested and imprisoned by the communist police, confessed to crimes against the Hungarian people and the church. He didn’t seem like himself, and it seemed that something had been done to him.
Mindszenty later described that he had been subjected to sleep deprivation, had potentially been drugged, and he said this famous line, which came to represent brainwashing, “Without knowing what had happened to me, I had become a different person.”
With the Korean War, U.S. Air Force POWs came forward with confessions that they had waged secret germ warfare over China and Korea, and they looked like Mindszenty had looked, as if in a sort of hypnotic trance. All of this is depicted in the 1962 movie “The Manchurian Candidate.”
The crisis reached its peak when 21 U.S. POWs who had been held behind enemy lines declared that they would prefer not to return to the United States but rather to stay in China. Then-CIA Director Allen Dulles declared that the soldiers had been converted against their will.
It was around this time that MKUltra, a secret CIA mind-control and chemical interrogation research program, was funded.
The case of heiress Patricia Hearst, who was kidnapped and brainwashed by leftist radicals in the 1970s, renewed public interest in brainwashing. Was it in fact brainwashing?
In the trial of Patty Hearst, which was called the trial of the century in 1976, four major experts who testified on her behalf said that what had been done to her was also what had been done to the POWs in the Korean War.
People had a hard time believing she had been coerced into becoming a leftist radical because she was captured on camera robbing a bank with the guerrilla group that had abducted her, but she said, “I accommodated my thoughts to coincide with theirs.” That’s the paradox of brainwashing. It hides itself in plain sight.
Some scholars argue that brainwashing doesn’t really exist, that it’s merely a hysterical response. In his book “The Captive Mind,” Polish poet Czeslaw Milosz writes that needing to accommodate your thoughts to coincide with a certain regime is to brainwash oneself. He describes how he ultimately couldn’t do it to himself, and that’s why he ended up leaving communist Poland.
In a sense, Patty Hearst, who was 19 when she was abducted and was subjected to physical abuse and indoctrination, couldn’t just pretend to be a soldier. She had to be one. And that’s brainwashing.
You argue in your book that social media, crypto, and other new technologies can produce some sort of mind control. How so?
Social media, AI companionship bots, and crypto, the culture of cryptocurrency investment, are digital environments that include a highly targeted form of emotional connectedness that often has a coercive element.
When we’re on social media, we’re constantly being exposed to messages and microenvironments, which resemble the process of brainwashing or mind control.
First, both start with a kind of ungrounding process or successive shocks. If you’re doom scrolling, you’re subjected to successive shocks, and there is a point of disorientation because we can feel overwhelmed by these algorithmically targeted pieces of information that we voluntarily expose ourselves to, but we can’t seem to stop.
Second is milieu control, which is the kind of siloing where you’re only getting controlled messages from certain sources.
That can result in what I call hyper-persuasion, which becomes a third form of brainwashing. What’s concerning is that these new technologies are targeted exactly for you. For example, AI chatbot companions may have your psychological makeup obtained from the internet or from information that you, and all of us, are giving away online.
You’ve been teaching a class on brainwashing for 20 years on and off. Why do you think students are interested in it?
There is a kind of fascination with brainwashing and mind control.
Some also may have some personal experience, like a relative was in a cult or sometimes even a personal relationship that was distressing to them. Sometimes they have questions about coercive control. How would one get into an abusive relationship? Or how do addictions feed into this? There is also a general fascination with cults.
Now students are more and more interested in social media and their use of targeted algorithms, and how the constant stream of trivial choices we all make may have a large effect.
Can anyone be susceptible to brainwashing?
There are studies of people who have been re-educated who describe how guilt from their childhood was capitalized on in the process of being recruited into a cult.
We think that brainwashing has to do with being forced to believe something, or that it works at the level of cognition or ideas, but it works more at the level of emotions. This sort of tapping into the emotional layer is what we often don’t see — the way that they capitalize on unresolved trauma, which is unprocessed, extreme emotion.
Being intelligent is not a protection against brainwashing. We shouldn’t think that only certain people are susceptible to being brainwashed. You may think that you’re too sophisticated, but because brainwashing happens at the emotional level, there is no protection against it.
What I found helpful is to be aware of the process taking place at the emotional level. We’re getting cues all the time as we interact with social media, or with a group of people who maybe want to recruit us into their groups. It’s helpful to be mindful of the visceral cues and not simply the ideas.
Hope for sufferers of ‘invisible’ tinnitus disorder

Daniel Polley.
Photo by Dylan Goodman
Researchers develop way to objectively measure common malady, which may improve diagnosis, help in developing therapies
Alvin Powell
Harvard Staff
Researchers are gaining new insights into the “invisible” disorder tinnitus, whose phantom ringing, hissing, and other noises are often linked to hearing damage, but for which physicians have not had an objective measure, until now.
The advance, reported in late April in the journal Science Translational Medicine and funded by the National Institute on Deafness and Other Communication Disorders, has the potential to provide physicians and researchers with a way to gauge tinnitus severity beyond the subjective patient questionnaires in use today. It may also help in developing more effective therapies.
In this edited conversation, Daniel Polley, director of the Eaton-Peabody Laboratories at Harvard-affiliated Massachusetts Eye and Ear and professor of otolaryngology-head and neck surgery at Harvard Medical School, discusses research conducted with MEE colleagues that examines involuntary pupil dilation and facial movement in reaction to sound in patients with varying levels of tinnitus.
What is tinnitus? Is it more than just ringing in the ears?
Most cases of tinnitus have one thing in common: the conscious awareness of a sound that doesn’t exist in the physical environment, a phantom sound.
I have tinnitus, and it’s like a 24/7 radio broadcast — a single note — that I usually can put out of mind. But it’s always there if I want to tune into it.
It’s exceedingly common, affecting about 12 percent of people. Among those 65 and older, those numbers jump to 25 percent and higher.
For most, this phantom sound is a mild nuisance, but for some it is debilitating. It’s not just an auditory problem, it’s a whole-life problem, a mental-well-being problem. Their tinnitus is not necessarily louder, because when most people match the loudness of their tinnitus to a physical sound, it is actually quite soft.
But what makes people with tinnitus disorder different is that it encroaches on systems that regulate mood and arousal level. A common complaint with severe tinnitus is that it takes longer to go to sleep and you wake up more easily.
Very often people with tinnitus disorder will have a hypersensitivity or aversion to sound. There’s high comorbidity with depression, anxiety, and social withdrawal, a spectrum of neurological and psychiatric issues that come along with it.
So, people who have real trouble, it’s not because it’s louder, but that they just can’t put it out of mind?
They can’t tune it out. Perhaps what makes the neurological signature of more severe tinnitus different than mild tinnitus is that the very systems in the brain responsible for tuning out irrelevant and uninformative things are co-opted in generating the tinnitus. That was the hypothesis that inspired the work that we did. That’s what got us going on this road.
And you believe your work could provide a way to understand this condition better, to study it better?
We need better therapies for tinnitus. That’s the top priority for the field — and for me as well. But taking shots at treatment without first laying the groundwork is unlikely to get us anywhere.
It’s not hard to claim a therapy works when success is measured only by subjective questionnaires and there’s no control for the placebo effect. To be convincing, future studies will need to show improvements in physiological signs of tinnitus distress — changes that are unlikely to come from placebo alone.
This study helps lay that groundwork. First, it offers a way to visualize different tinnitus subtypes. Second, it allows us to link those subtypes to an intervention and ask, “Did it work?” not just based on whether the patient says they feel better, but whether something objective in the body changed, too. That’s how we’ll know we’re truly making progress.
So, what’s different about tinnitus from the common cold or cancer is that, before now, we didn’t have a physiological way to identify what’s going on? It’s subjective and self-reported?
That’s right. It puts us back into the 19th century or 18th century. With any other neurological disorder, like epilepsy, you can measure a seizure or a stroke. With Parkinson’s, you have the neuroimaging and can do an objective measurement of motor behavior.
There aren’t many disorders that are truly hidden, where you can’t use outputs or inputs to shine light on the ghost in the machine. Chronic pain is first and foremost in that category — it’s even more common than tinnitus.
And for both of these conditions, you need an objective measure. For chronic pain, all they have is, “How bad is your pain today, on a scale of one to 10?” That’s the value of this metric: It predicts the individual severity scores that come from the questionnaire.
Can you describe the measure that you’ve documented?
The study provides a new way of thinking about what’s causing tinnitus. We wanted to come up with a measure that would relate to someone’s severity and not just distinguish them from someone without tinnitus.
We also wanted to avoid a measurement that could only be done in a specialized research hospital with expensive equipment. We want to measure these things with equipment that could feasibly wind up in a typical hearing health clinic.
Our idea is that when you or I go about our day, our brain is always surveilling the environment for possible threats so we can defend ourselves, flee, or freeze in place. Those systems are designed to get your conscious attention because you need to be aware of a possible threat.
If those systems are co-opted in the tinnitus-generation network, that would explain why you can’t put it out of mind: because you’ve incorporated the system that is designed to always elicit conscious awareness. If these networks identify a threat, they engage the sympathetic nervous system — fight, flight, or freeze — and you get, among other things, pupil dilation and increased galvanic skin response.
So, if people with severe tinnitus have their auditory threat evaluation system stuck in overdrive, then we could present emotionally evocative sounds that span a range: neutral sounds, like a typewriter; pleasant sounds, like a giggling baby; and sounds that almost everybody finds unpleasant, like an intense fit of coughing.
We expected that people with more severe tinnitus would have an overly robust response to a broad class of sounds: their sympathetic nervous system would report all of these sounds as possible threats.
How do you link that to an objective measure?
Obviously, we control our faces to communicate our emotional status, but our faces also involuntarily move to reflect our evaluation of events — pleasant or unpleasant — and our internal state of being — sad or happy. A lot of studies have examined facial movements when people are presented with images intended to cause happiness or fear, but nobody’s looked at facial movements when people are presented with sounds. We did and found that sounds do elicit facial movements.
If the sound is pleasant, in a neurotypical person there’s more facial movement around the mouth. If the sound is unpleasant, you get movement in the brow, squeezing the eyes.
When we looked at people with severe tinnitus and sound sensitivity, there was a very clear difference. Their faces didn’t move. They had a blunted affect across the board, from pleasant to neutral to unpleasant. There was a diminished response to all.
Nobody’s ever measured it before. Nobody’s ever thought about the face and its connection with tinnitus. But that ended up being far and away the most informative measurement to predict an individual’s tinnitus severity.
There was a pupil response, too?
Yes, the pupil is part of the sympathetic nervous system. It’s wired into the fight, flight, or freeze system. The pupil dilates when the sympathetic nervous system is activated and, in our work, the pupil over-dilated to the sounds that the face was under-moving to.
They’re mirror images of each other, providing different perspectives on someone’s severity. If you use them together, you can predict somebody’s tinnitus severity better than if you used just one. The face is by far the more informative of the two.
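To make the idea of combining the two measures concrete, here is a minimal sketch in Python of predicting a questionnaire-based severity score from two physiological features. The feature names, synthetic data, and linear model are hypothetical illustrations, not the study's actual data or analysis pipeline:

    # Hypothetical sketch: combine two physiological features to predict
    # a questionnaire-based tinnitus severity score. Synthetic data only;
    # this is not the study's analysis pipeline.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 80  # hypothetical cohort size

    # face: sound-evoked facial movement (blunted in severe tinnitus)
    # pupil: sound-evoked pupil dilation (exaggerated in severe tinnitus)
    face = rng.normal(size=n)
    pupil = -0.6 * face + rng.normal(scale=0.8, size=n)
    severity = 50 - 8 * face + 5 * pupil + rng.normal(scale=5, size=n)

    def cv_r2(*features):
        """Cross-validated R^2 of a linear model predicting severity."""
        X = np.column_stack(features)
        return cross_val_score(LinearRegression(), X, severity,
                               cv=5, scoring="r2").mean()

    print("face only: ", round(cv_r2(face), 2))
    print("pupil only:", round(cv_r2(pupil), 2))
    print("combined:  ", round(cv_r2(face, pupil), 2))

On synthetic data generated this way, the combined model scores a higher cross-validated R^2 than either feature alone, mirroring the qualitative point above.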
How could this be used as a tool?
The first FDA-approved device for tinnitus is available for prescription, but there’s controversy about how effective it is. One of the issues is that they use the same subjective questionnaires to evaluate results. Every time a tinnitus intervention is identified, people ask, “Is it placebo?”
My lab is focused on developing new therapies, so these results are an important milestone. We can incorporate them into our interventional studies. We want to migrate to a video-based system so we can make high-quality measurements faster, with less specialized equipment. We might get it into clinical use if doctors can subtype a tinnitus patient as severe or mild in their office with an objective measure.
Out of sight but not out of mind

Elena Luchkina is a research scientist in the Department of Psychology.
Photo by Grace DuVal
By 15 months, children can learn names of objects they’ve never seen, study says
Saima Sidik
Harvard Correspondent
Love, quantum mechanics, yesterday’s weather — humans readily discuss these and many other things they cannot see. Infants start to develop this ability early, new research suggests. Even 15-month-olds can learn the meanings of nouns without seeing their corresponding objects, according to work performed by Elena Luchkina, a research scientist in Elizabeth Spelke’s lab at the Harvard Department of Psychology, and Sandra Waxman, a professor of psychology and director of the Infant and Child Development Center at Northwestern University.
In this edited conversation with the Gazette, Luchkina discusses how she infers what infants are thinking, why her work could help treat learning difficulties, and whether the ability to discuss the unseen sets us apart in the animal kingdom.
Are humans the only animals that talk about things they can’t see?
That’s debatable, and the answer depends on who you ask. There’s evidence that great apes can communicate about things that aren’t around, but in a limited way. For example, if you show an ape an object and then rapidly hide it, they may point to the place where they’ve just seen it. Or they can request food that is not currently around them. But this is not the same as how we, humans, can communicate about absent or invisible things via language.
For example, if I describe my favorite mug, I can give you all kinds of details that aren’t obvious from its appearance, like that my sister gave it to me and that she bought it at the corner store. Scientists haven’t observed nonhuman animals communicating about hidden objects or abstract concepts in such depth.
But we’re not born with this ability. By their first birthdays, most kids can do what apes do — point to the former locations of things they’ve seen recently, like a ball that their parent has just hidden. This is a big leap forward. Yet, being able to refer to recently seen things is different from being able to refer to unseen or abstract things. Kids usually develop this ability by age 2, and then they start talking about things like absent caregivers and what’s going to happen tomorrow. I hope to understand how and when this capacity emerges.
How did you figure out the age at which infants can learn the meanings of new words without seeing their corresponding objects?
We’re working with children who are too young to say more than the odd word here and there, so we tracked their eye movements to infer what they know and think.
During the training portion of the experiment, we showed infants a video of an actress who looked over her shoulder and named objects that popped up on a screen behind her. For example, if an apple appeared, she’d say, “Look, it’s an apple!” She did that three times, naming three objects from a particular category, like fruits. The fourth time, the object popped up behind her body where the infant couldn’t see it. Instead of using the real name of the fruit, she used a nonsense word, like, “Look, it’s a blicket!”
Finally, during the test, a screen popped up with two objects — one was a fruit that we thought would be unfamiliar to most infants in our study, such as a dragon fruit. The other was an unrelated item such as an ottoman or a car. Then we said to the infant, “Find the blicket!” and we tracked how long the infant looked at each object. If an infant looked at the fruit longer than at the unrelated item, we inferred that they understood a blicket to be a type of fruit, even without seeing it, because the other three items were fruits.
We repeated the procedure a few times with different categories of objects, and control conditions helped us gain confidence in the results.
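The inference step itself is a simple preferential-looking computation: did the infant look longer at the category-matching object than at the unrelated one? As a minimal sketch in Python, with hypothetical numbers rather than the study's data or analysis code:

    def target_preference(ms_on_target, ms_on_distractor):
        """Proportion of total looking time spent on the target object."""
        total = ms_on_target + ms_on_distractor
        return ms_on_target / total if total else float("nan")

    # One hypothetical test trial: dragon fruit (target) vs. ottoman (distractor)
    p = target_preference(ms_on_target=3200, ms_on_distractor=2100)
    print(f"target preference: {p:.2f}")  # values above 0.5 suggest the infant
                                          # mapped "blicket" to the fruit category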
And what did you learn?
What was really interesting was that 15-month-olds were able to find the blicket, but not 12-month-olds. That could be because 12-month-olds don’t have the attention span or memory capacity to complete the task yet. Or 12-month-olds may not have developed the ability to form a mental image of an object without ever having seen it, whereas 15-month-olds are mature enough to do it.
When scientists tried to answer this question in the past, their research suggested that infants had to be 19-24 months old before they could attach a word to an unseen object. So we’ve found that infants have this ability at a younger age than was previously thought.
In the paper, you compare an infant’s ability to spot the blicket to an adult’s ability to discuss some pretty sophisticated concepts, like justice or the square root of negative one. What’s the connection?
It’s true — the infants won’t be discussing imaginary numbers anytime soon. But the capacity to represent an unseen object and learn its name might be a building block for communication about more sophisticated abstract concepts. Similar to adults, infants in our study are creating mental representations of things they can’t currently see and holding such representations in mind while mapping words to them.
What’s next for this research?
We’d like to know whether the infants who are best at finding the blicket at 15 months are also most able to learn from language alone at 24 months. If that’s the case, it could mean that an early ability to learn about unseen objects gives infants an important foundation for learning from language later in life.
What kind of applications might this work have?
If infants who perform better on our task at 15 months also are better at learning from language at 24 months — and that’s truly because of the ability to learn from language and not other factors like memory or attention — then the find-the-blicket task might be useful as a diagnostic tool for difficulties with learning from language. Diagnosing these problems early could give us the opportunity to design interventions that would smooth out those difficulties before they lead to trouble in school.
The research described in this story received funding from the National Institutes of Health.
Cambridge PhD student heading to CERN for the International FameLab final

After winning a nail-biting East of England final, which was held as part of the Cambridge Festival in April 2025, Spatika went on to represent the East of England in the UK final with her presentation on Time Travel with Your Brain. She will now represent the UK in the International Final, taking place live at CERN Science Gateway in Switzerland to mark the 20th anniversary of the competition.
“I was so surprised I won!” said Spatika. “The other communicators were fantastic and we travelled through so many topics, from planets to parasites and more!”
Spatika took part in FameLab because she enjoyed talking about science to non-scientists and bringing some meaning to the complex work taking place in the labs. “I wanted a chance to bring humour into the science, because most of the time science is presented in professional environments, it’s all very serious,” added Spatika.
“I would recommend FameLab for anyone who’s even a tiny bit interested in knowing what happens to science when it’s let out in the wild!”
Claudia Antolini, Public Engagement Manager at the University of Cambridge, said: “We are delighted for Spatika to represent the UK at the International FameLab final. Both at the East of England regional competition and the UK final, Spatika gave outstanding performances, scientifically accurate but also extremely engaging, with wise-cracking humour. We wish her the best of luck and we look forward to cheering her on in the International Final.”
The FameLab final will be streamed live from CERN on YouTube.
Spatika Jayaram is a PhD student and Gates Cambridge Scholar in the Department of Physiology, Development and Neuroscience and Magdalene College. In her research, she looks at social and emotional behaviours emerging across development, and how regions within the prefrontal cortex contribute to their regulation. Her supervisor is Professor Angela Roberts.
FameLab was created by Cheltenham Festivals in 2005 and is the largest science communication competition and training programme in the world. Participants have just three minutes to convey a scientific concept of their choice to an audience and expert panel of judges with no presentations and limited props.
Earlier this month, Cambridge PhD student Spatika Jayaram was crowned the winner of the FameLab 2025 UK final at this year’s Cheltenham Science Festival.
Pilkington Prizes awarded to teaching staff

This year's prize winners are:
Dr Tore Butlin - Department of Engineering/Queens' College:
Tore has played a key role in reshaping the engineering course content and led the design of the new Part IA mechanics syllabus.
Dr Alexander Carter - Institute of Continuing Education/Fitzwilliam College:
As Academic Director for Philosophy & Interdisciplinary Studies, Alexander leads a broad-ranging portfolio of undergraduate and postgraduate courses in philosophy, creativity theory and research skills.
Dr Nicholas Evans - Department of Clinical Neurosciences/Wolfson College:
Nicholas has demonstrated an impressive commitment to medical education at the Clinical School for over a decade. As a mentor he has also shown a keen interest in student welfare.
Dr James Fergusson - Department of Applied Mathematics and Theoretical Physics:
James is an outstanding lecturer who brings passion to everything he does. He has been heavily involved in establishing and supporting the new MPhil in Data Intensive Science.
Dr Marta Halina - Department of History and Philosophy of Science/Selwyn College:
Marta has almost single-handedly overhauled the History and Philosophy of Science Tripos, making it a more sought-after course. She has led a major restructuring of the MPhil course and has introduced the increasingly popular module, AI in healthcare.
Paul Hoegger - University Language Centre/Faculty of Modern and Medieval Languages and Linguistics/Fitzwilliam College:
Paul is a teacher of German much respected by generations of students. Over the years he has created several new courses including one on German literature through the ages and one on the poetry of Schubert.
Dr Kate Hughes - Department of Veterinary Medicine/Girton College:
Kate makes a valued contribution to Years 4-6 of the veterinary programme. She led the design of a new final-year rotation in anatomic pathology, for which she is educational lead.
Dr Mairi Kilkenny - Department of Biochemistry/Queens' College:
Mairi delivers innovative and creative teaching within the Department of Biochemistry, often incorporating digital media to stimulate the interest of her students. She's also a supervisor for several Colleges.
Dr Ewa Marek - Department of Chemical Engineering and Biotechnology/Jesus College:
Ewa is a valued lecturer, supervisor and Director of Studies. Passionate about sustainability, Ewa developed a new Part IA course which introduces the topic in the context of chemical and biochemical engineering.
Dr Isabelle McNeill - Faculty of Modern and Medieval Languages and Linguistics/Trinity Hall:
Isabelle was a passionate and outstanding teacher who made vibrant contributions to French and to Film and Screen within the Faculty. A co-founder and trustee of the Cambridge Film Trust, Isabelle was made aware of her prize two days before she sadly passed away in February. She will be much missed by colleagues and students alike.
Dr Ali Meghji - Department of Sociology/Sidney Sussex College:
Ali has been instrumental in creating a whole new Tripos paper in the Department (Empire, Colonialism, Imperialism). As a teacher, he repeatedly receives glowing comments from students on the clarity of his exposition, the contemporary relevance of his topics, and his effective use of technology.
Dr Liam Saddington - Department of Geography/Lucy Cavendish College:
Liam was recruited as Training and Skills Director for the Tripos with a remit to oversee quantitative and qualitative research training across the degree. He has led innovations such as creating a museum field trip for first-year students, organising a 'COP Cambridge' simulation for second-year students, and developing the dissertation 'research carousel'.
Dr Christopher Tilmouth - Faculty of English:
Chris' visionary leadership has reshaped both undergraduate and postgraduate education at Cambridge. As Director of Undergraduate Studies, Chris introduced critical reforms to enhance student progression.
Dr Juliet Usher-Smith - Department of Public Health and Primary Care/Emmanuel College:
Juliet has made important contributions to the Department through direct teaching, supervision and mentoring, and goes the extra mile to foster a culture in which teaching and learning are valued by all.
The winners were presented with their awards by the University's Vice-Chancellor, Professor Deborah Prentice, at a ceremony also attended by Senior Pro-Vice-Chancellor (Education and Environmental Sustainability), Professor Bhaskar Vira. He said: “The Pilkington Prize Award ceremony is one of my favourite events in the University calendar. It’s always deeply satisfying to see hard-working staff recognised for their commitment and dedication to teaching and learning. We all know that behind every great student is a great teacher and I feel privileged to work alongside such excellent colleagues.”
A total of fourteen dedicated and talented staff have been awarded the Pilkington Prize this year. The annual prizes are awarded in the name of Sir Alastair Pilkington to acknowledge excellence in teaching and to recognise the contribution each individual makes to a Department or Faculty.
Cambridge University academics recognised in King’s Birthday Honours 2025

Cambridge Zero Director Professor Emily Shuckburgh (Fellow of Darwin, Trinity alumna) has received a CBE for services to Climate Science and to the Public Communication of Climate Science.
"I am deeply honoured to accept this recognition, which is a reflection of the collective efforts of many scientists, communicators, educators, and advocates who strive every day to make climate science accurate, accessible and actionable at a time when honesty, clarity and urgency are more important than ever,” Professor Shuckburgh said.
Alongside leading the University of Cambridge’s major climate change initiative, Cambridge Zero, Emily is also Professor of Environmental Data Science at the Department of Computer Science and Technology. Her primary research is focused on the application of artificial intelligence to climate science and in this context she is Academic Director of the Institute of Computing for Climate Science, and co-Director of the UKRI Centre for Doctoral Training on the Application of AI to the study of Environmental Risks (AI4ER).
Professor Gordon Dougan (Fellow of Wolfson College), an Emeritus Professor who continues to work in the University’s Department of Medicine, and former Director of the Infection Health Challenge area at Wellcome, UK, has been awarded a CBE for services to Vaccines and to Global Health.
Professor Dougan is an internationally recognised expert in vaccinology, global health and infections. He was Head of Pathogens at the Wellcome Sanger Institute (WTSI) for over a decade and worked in the pharmaceutical industry (Wellcome Foundation/GSK) for part of his career, developing novel vaccines and other medicines. He has worked as an advisor to health agencies, industry, academia and regulatory agencies. He is an expert on the molecular basis of infection with a strong emphasis on pathogenic mechanisms/immunity, genomics, disease tracking and antibiotic resistance. He is currently President of the Microbiology Society of the UK.
He said: “I am delighted to receive this important recognition for my work and the people I have worked with and for. Applying science to the benefit of people and health is what I have been working toward throughout my career. I can recommend this path to anyone.”
Details of University alumni who are recognised in the King's Birthday Honours will be published on the University's alumni website.
The University extends its congratulations to all academics, staff and alumni who have received an honour.
Academics at the University of Cambridge are among those featured in the King's Birthday Honours 2025, which recognises the achievements and contributions of people across the UK.
A journey of resilience, fueled by learning
In 2021, Hilal Mohammadzai was set to begin his senior year at the American University of Afghanistan (AUAF), where he was working toward a bachelor’s degree in computer science. However, that August, the Taliban seized control of the Afghan government, and Mohammadzai’s education — along with that of thousands of other students — was put on hold.
“It was an uncertain future for all of the students,” says Mohammadzai.
Mohammadzai ultimately did receive his undergraduate degree from AUAF in May 2023 after months of disruption, and after transferring and studying for one semester at the American University of Bulgaria. As he was considering where to take his studies next, Mohammadzai heard about the MIT Emerging Talent Certificate in Computer and Data Science. His friend graduated from the program in early 2023 and had only positive things to say about the education, community, and network.
Creating opportunities to learn data science
Part of MIT Open Learning, Emerging Talent develops global education programs for talented individuals from challenging economic and social circumstances, equipping them with the knowledge and tools to advance their education and careers.
The Certificate in Computer and Data Science is a year-long online learning program for talented learners including refugees, migrants, and first-generation low-income students from historically marginalized backgrounds and underserved communities worldwide. The curriculum incorporates computer science and data analysis coursework from MITx, professional skill building, capstone projects, mentorship and internship options, and opportunities for networking with MIT’s global community.
Throughout his undergraduate coursework, Mohammadzai discovered an affinity for data visualization, and decided that he wanted to pursue a career in data science. The opportunity with the Emerging Talent program presented itself at the perfect time. Mohammadzai applied and was accepted into the 2023-24 cohort, earning a spot out of a pool of over 2,000 applicants.
“I thought it would be a great opportunity to learn more data science to build up on my existing knowledge,” he says.
Expanding and deepening his data science knowledge
Mohammadzai’s acceptance to the Emerging Talent program came around the same time that he began an MBA program at the American University of Central Asia in Kyrgyzstan. For him, the two programs made for a perfect pairing.
“When you have data science knowledge, you usually also require domain knowledge — whether it's in business or economics — to help with interpreting data and making decisions,” he says. “Analyzing the data is one piece, but understanding how to interpret that data and make a decision usually requires domain knowledge.”
Although Mohammadzai had some data science experience from his undergraduate coursework, he learned new skills and new approaches to familiar knowledge in the Emerging Talent program.
“Data structures were covered at university, but I found it much more in-depth in the MIT courses,” says Mohammadzai. “I liked the way it was explained with real-life examples.”
He worked with students from different backgrounds, and used GitHub for group projects. Mohammadzai also took advantage of personal agency and job-readiness workshops provided by the Emerging Talent team, such as how to pursue freelancing and build a mentorship network — skills that he has taken forward in life.
“I found it an exceptional opportunity,” he says. “The courses, the level of education, and the quality of education that was provided by MIT was really inspiring to me.”
Applying data skills to real-world situations
After graduating with his Certificate in Computer and Data Science, Mohammadzai began a paid internship with TomorrowNow, which was facilitated by introductions from the Emerging Talent team. Mohammadzai’s resume and experience stood out to the hiring team, and he was selected for the internship program.
TomorrowNow is a climate-tech nonprofit that works with philanthropic partners, commercial markets, R&D organizations, and local climate adaptation efforts to localize and open source weather data for smallholder farmers in Africa. The organization builds public capacity and facilitates partnerships to deploy and sustain next-generation weather services for vulnerable communities facing climate change, while also enabling equitable access to these services so that African farmers can optimize scarce resources such as water and farm inputs.
Leveraging philanthropy as seed capital, TomorrowNow aims to de-risk weather and climate technologies to make high-quality data and products available for the public good, ultimately incentivizing the private sector to develop products that reach last-mile communities often excluded from advancements in weather technology.
For his internship, Mohammadzai worked with TomorrowNow climatologist John Corbett to understand the weather data, and ultimately learn how to analyze it to make recommendations on what information to share with customers.
“We challenged Hilal to create a library of training materials leveraging his knowledge of Python and targeting utilization of meteorological data,” says Corbett. “For Hilal, the meteorological data was a new type of data and he jumped right in, working to create training materials for Python users that not only manipulated weather data, but also helped make clear patterns and challenges useful for agricultural interpretation of these data. The training tools he built helped to visualize — and quantify — agricultural meteorological thresholds and their risk and potential impact on crops.”
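As a rough illustration of the kind of analysis Corbett describes, the following minimal Python sketch flags agronomic thresholds in a daily weather series. It is a toy example only: the column names, values, and threshold numbers are invented for illustration and are not drawn from TomorrowNow's actual training materials.

    import pandas as pd

    # Toy daily weather series; real training materials would use station
    # or satellite data. All values here are invented for illustration.
    weather = pd.DataFrame({
        "date": pd.date_range("2024-03-01", periods=10, freq="D"),
        "rain_mm": [0, 0, 12, 3, 0, 0, 0, 25, 1, 0],
        "tmax_c": [31, 33, 29, 30, 35, 36, 34, 28, 30, 32],
    })

    HEAT_THRESHOLD_C = 34  # hypothetical crop heat-stress threshold
    weather["heat_stress"] = weather["tmax_c"] >= HEAT_THRESHOLD_C
    weather["dry_day"] = weather["rain_mm"] < 1

    # Longest run of consecutive dry days, a common agronomic risk indicator:
    # label each run of identical dry/wet values, then take the largest count
    # of dry days within any single run.
    runs = (weather["dry_day"] != weather["dry_day"].shift()).cumsum()
    longest_dry = weather.groupby(runs)["dry_day"].sum().max()

    print(f"heat-stress days: {int(weather['heat_stress'].sum())}")
    print(f"longest dry spell: {int(longest_dry)} days")

On this invented series the script reports three heat-stress days and a three-day dry spell; the point is the pattern of turning raw weather columns into threshold flags that can be read agronomically.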
Although he had previously worked with real-world data, working with TomorrowNow marked Mohammadzai’s first experience in the domain of climate data. This area presented a unique set of challenges and insights that broadened his perspective. It not only solidified his desire to continue on a data science path, but also sparked a new interest in working with mission-focused organizations. Both TomorrowNow and Mohammadzai would like to continue working together, but he first needs to secure a work visa.
Without a visa, Mohammadzai cannot work for more than three to four hours a day, which makes securing a full-time job impossible. Back in 2021, the American University of Afghanistan filed a P-1 (priority one) asylum case for their students to seek resettlement in the United States because of the potential threat posed to them by the Taliban.
Mohammadzai’s hearing was scheduled for Feb. 1, but it was postponed after the program was suspended early this year.
As Mohammadzai looks to the end of his MBA program, his future feels uncertain. He has lived abroad since 2021 thanks to student visas and scholarships, but until he can secure a work visa he has limited options. He is considering pursuing a PhD program in order to keep his student visa status, while he waits on news about a more permanent option.
“I just want to find a place where I can work and contribute to the community.”
© Photo courtesy of Hilal Mohammadzai.
First-of-its-kind device profiles newborns’ immune function
© Photo courtesy of the KK Women’s and Children’s Hospital.
After more than a decade of successes, ESI’s work will spread out across the Institute
MIT’s Environmental Solutions Initiative (ESI), a pioneering cross-disciplinary body that helped give a major boost to sustainability and solutions to climate change at MIT, will close as a separate entity at the end of June. But that’s far from the end for its wide-ranging work, which will go forward under different auspices. Many of its key functions will become part of MIT’s recently launched Climate Project. John Fernandez, head of ESI for nearly a decade, will return to the School of Architecture and Planning, where some of ESI’s important work will continue as part of a new interdisciplinary lab.
When the ideas that led to the founding of MIT’s Environmental Solutions Initiative first began to be discussed, its founders recall, there was already a great deal of work happening at MIT relating to climate change and sustainability. As Professor John Sterman of the MIT Sloan School of Management puts it, “there was a lot going on, but it wasn’t integrated. So the whole added up to less than the sum of its parts.”
ESI was founded in 2014 to help fill that coordinating role, and in the years since it has accomplished a wide range of significant milestones in research, education, and communication about sustainable solutions in a wide range of areas. Its founding director, Professor Susan Solomon, helmed it for its first year, and then handed the leadership to Fernandez, who has led it since 2015.
“There wasn’t much of an ecosystem [on sustainability] back then,” Solomon recalls. But with the help of ESI and some other entities, that ecosystem has blossomed. She says that Fernandez “has nurtured some incredible things under ESI,” including work on nature-based climate solutions, and also other areas such as sustainable mining, and reduction of plastics in the environment.
Desiree Plata, director of MIT’s Climate and Sustainability Consortium and associate professor of civil and environmental engineering, says that one key achievement of the initiative has been in “communication with the external world, to help take really complex systems and topics and put them in not just plain-speak, but something that’s scientifically rigorous and defensible, for the outside world to consume.”
In particular, ESI has created three very successful products, which continue under the auspices of the Climate Project. These include the popular TIL Climate Podcast, the Webby Award-winning Climate Portal website, and the online climate primer developed with Professor Kerry Emanuel. “These are some of the most frequented websites at MIT,” Plata says, and “the impact of this work on the global knowledge base cannot be overstated.”
Fernandez says that ESI has played a significant part in helping to catalyze what has become “a rich institutional landscape of work in sustainability and climate change” at MIT. He emphasizes three major areas where he feels the ESI has been able to have the most impact: engaging the MIT community, initiating and stewarding critical environmental research, and catalyzing efforts to promote sustainability as fundamental to the mission of a research university.
Engagement of the MIT community, he says, began with two programs: a research seed grant program and the creation of MIT’s undergraduate minor in environment and sustainability, launched in 2017.
ESI also created a Rapid Response Group, which gave students a chance to work on real-world projects with external partners, including government agencies, community groups, nongovernmental organizations, and businesses. In the process, they often learned why dealing with environmental challenges in the real world takes so much longer than they might have thought, he says, and that a challenge that “seemed fairly straightforward at the outset turned out to be more complex and nuanced than expected.”
The second major area, initiating and stewarding environmental research, grew into a set of six specific program areas: natural climate solutions, mining, cities and climate change, plastics and the environment, arts and climate, and climate justice.
These efforts included collaborations with a Nobel Peace Prize laureate, three successive presidential administrations from Colombia, and members of communities affected by climate change, including coal miners, indigenous groups, various cities, companies, the U.N., many agencies — and the popular musical group Coldplay, which has pledged to work toward climate neutrality for its performances. “It was the role that the ESI played as a host and steward of these research programs that may serve as a key element of our legacy,” Fernandez says.
The third broad area, he says, “is the idea that the ESI as an entity at MIT would catalyze this movement of a research university toward sustainability as a core priority.” While MIT was founded to be an academic partner to the industrialization of the world, “aren’t we in a different world now? The kind of massive infrastructure planning and investment and construction that needs to happen to decarbonize the energy system is maybe the largest industrialization effort ever undertaken. Even more than in the recent past, the set of priorities driving this have to do with sustainable development.”
Overall, Fernandez says, “we did everything we could to infuse the Institute in its teaching and research activities with the idea that the world is now in dire need of sustainable solutions.”
“It’s been a very strong and useful program, both for education and research,” Solomon says. But it is appropriate at this time to distribute its projects to other venues, she adds. “We do now have a major thrust in the Climate Project, and you don’t want to have redundancies and overlaps between the two.”
Fernandez says “one of the missions of the Climate Project is really acting to coalesce and aggregate lots of work around MIT.” Now, with the Climate Project itself, along with the Climate Policy Center and the Center for Sustainability Science and Strategy, it makes more sense for ESI’s climate-related projects to be integrated into these new entities, and other projects that are less directly connected to climate to take their places in various appropriate departments or labs, he says.
“We did enough with ESI that we made it possible for these other centers to really flourish,” he says. “And in that sense, we played our role.”
As of June 1, Fernandez has returned to his role as professor of architecture and urbanism and building technology in the School of Architecture and Planning, where he directs the Urban Metabolism Group. He will also be starting up a new group called Environment ResearchAction (ERA) to continue ESI work in cities, nature, and artificial intelligence.
© Photo: Casey Atkins
Be a student for a week
New biomaterial developed by NUS researchers shows how ageing in the heart could be reversed
A new lab-grown material has revealed that some of the effects of ageing in the heart may be slowed and even reversed. The discovery could open the door to therapies that rejuvenate the heart by changing its cellular environment, rather than focusing on the heart cells themselves.
The research, published recently in Nature Materials, was carried out by a team led by Assistant Professor Jennifer Young from the Department of Biomedical Engineering in the College of Design and Engineering (CDE) at the National University of Singapore (NUS). Asst Prof Young is also a scientist at the NUS Mechanobiology Institute (MBI).
The team focused on the extracellular matrix (ECM)—the complex framework that surrounds and supports heart cells. This net-like scaffolding made of proteins and other components holds cells in place and sends chemical signals that guide how the cells function.
As the heart ages, the ECM becomes stiffer and its biochemical composition changes. These changes can trigger harmful activity in heart cells, contributing to scarring, loss of flexibility, and reduced function.
“Most ageing research focuses on how cells change over time,” said Asst Prof Young. “Our study looks instead at the ECM and how changes in this environment affect heart ageing.”
To investigate this, the team developed a hybrid biomaterial called DECIPHER (DECellularized In Situ Polyacrylamide Hydrogel-ECM hybrid), made by combining natural heart tissue with a synthetic gel to closely mimic the stiffness and composition of the ECM.
“Until now, it’s been difficult to pinpoint which of these changes—physical stiffness or biochemical signals—plays the bigger role in driving this decline, because they usually happen at the same time,” said Avery Rui Sun, PhD student in NUS CDE and MBI, and first author of the study.
“The DECIPHER platform solves this problem, allowing researchers to independently control the stiffness and the biochemical signals presented to the cells—something no previous system using native tissue has been able to do.”
When the team placed aged heart cells onto DECIPHER scaffolds that mimicked ‘young’ ECM cues, they found that the cells began to behave more like young cells—even when the material remained stiff. Closer investigation revealed that this included a shift in gene activity across thousands of genes associated with ageing and cell function.
In contrast, young cells placed on ‘aged’ ECM began to show signs of dysfunction, even if the scaffold was soft.
“This showed us that the biochemical environment around aged heart cells matters more than stiffness, while young cells take in both cues,” said Asst Prof Young.
“Even when the tissue was very stiff, as it typically is in aged hearts, the presence of ‘young’ biochemical signals was enough to push aged cells back toward a healthier, more functional state,” she added. “This suggests that if we can find a way to restore these signals in the ageing heart, we might be able to reverse some of the damage and improve how the heart functions over time.”
“On the other hand, for young heart cells, we found that higher stiffness can cause them to prematurely ‘age’, suggesting that if we target specific aspects of ECM ageing, we might slow or prevent heart dysfunction over time.”
While the work is still in the research phase, the team says their findings open up a new direction for therapies aimed at preserving or restoring heart health during ageing by targeting the ECM itself. Because the ECM plays a major role in cell function across all our tissues, they believe the DECIPHER method could also be adapted to study ageing and disease in other organs.
“Many age-related diseases involve changes in tissue stiffness—not just in the heart,” said Asst Prof Young. “For example, the same approach could be applied to kidney and skin tissue, and it could be adapted to study conditions like fibrosis or even cancer, where the mechanical environment plays a major role in how cells behave.”
The study was a collaboration involving researchers from CDE, MBI, Department of Biological Sciences at the NUS Faculty of Science, NUS Yong Loo Lin School of Medicine and Cardiovascular Research Institute at the National University Heart Centre Singapore.
AI tool developed to ensure colorectal cancer patients receive correct dose of chemotherapy
University of Melbourne and Western Health researchers have developed a new artificial intelligence tool to prevent cancer patients from receiving incorrect doses of chemotherapy.
Senior Thesis Spotlight: A concerto inspired by history, art and the 'rough edges of Rush'
Senior Thesis Spotlight: What isotopes in redwood leaves reveal about dinosaur diets
Senior Thesis Spotlight: A ‘high-risk, but well-defined’ idea to advance quantum computing
Princeton Summer Theater opens its 2025 season
Six exceptional scholars selected as Princeton’s 2025-26 Fung Global Fellows
Decarbonizing steel is as tough as steel
The long-term aspirational goal of the Paris Agreement on climate change is to cap global warming at 1.5 degrees Celsius above preindustrial levels, and thereby reduce the frequency and severity of floods, droughts, wildfires, and other extreme weather events. Achieving that goal will require a massive reduction in global carbon dioxide (CO2) emissions across all economic sectors. A major roadblock, however, could be the industrial sector, which accounts for roughly 25 percent of global energy- and process-related CO2 emissions — particularly within the iron and steel sector, industry’s largest emitter of CO2.
Iron and steel production now relies heavily on fossil fuels (coal or natural gas) for heat, converting iron ore to iron, and making steel strong. Steelmaking could be decarbonized by a combination of several methods, including carbon capture technology, the use of low- or zero-carbon fuels, and increased use of recycled steel. Now a new study in the Journal of Cleaner Production systematically explores the viability of different iron-and-steel decarbonization strategies.
Today’s strategy menu includes improving energy efficiency, switching fuels and technologies, using more scrap steel, and reducing demand. Using the MIT Economic Projection and Policy Analysis model, a multi-sector, multi-region model of the world economy, researchers at MIT, the University of Illinois at Urbana-Champaign, and ExxonMobil Technology and Engineering Co. evaluate the decarbonization potential of replacing coal-based production processes with electric arc furnaces (EAF), along with either scrap steel or “direct reduced iron” (DRI), which is fueled by natural gas with carbon capture and storage (NG CCS DRI-EAF) or by hydrogen (H2 DRI-EAF).
Under a global climate mitigation scenario aligned with the 1.5 C climate goal, these advanced steelmaking technologies could result in deep decarbonization of the iron and steel sector by 2050, as long as technology costs are low enough to enable large-scale deployment. Higher costs would favor the replacement of coal with electricity and natural gas, greater use of scrap steel, and reduced demand, resulting in a more-than-50-percent reduction in emissions relative to current levels. Lower technology costs would enable massive deployment of NG CCS DRI-EAF or H2 DRI-EAF, reducing emissions by up to 75 percent.
Even without adoption of these advanced technologies, the iron-and-steel sector could significantly reduce its CO2 emissions intensity (how much CO2 is released per unit of production) with existing steelmaking technologies, primarily by replacing coal with gas and electricity (especially if it is generated by renewable energy sources), using more scrap steel, and implementing energy efficiency measures.
“The iron and steel industry needs to combine several strategies to substantially reduce its emissions by mid-century, including an increase in recycling, but investing in cost reductions in hydrogen pathways and carbon capture and sequestration will enable even deeper emissions mitigation in the sector,” says study supervising author Sergey Paltsev, deputy director of the MIT Center for Sustainability Science and Strategy (MIT CS3) and a senior research scientist at the MIT Energy Initiative (MITEI).
This study was supported by MIT CS3 and ExxonMobil through its membership in MITEI.
© Photo courtesy of the American Iron and Steel Institute.
How ‘supergenes’ help fish evolve into new species

Why are there so many different kinds of animals and plants on Earth? One of biology’s big questions is how new species arise and how nature’s incredible diversity came to be.
Cichlid fish from Lake Malawi in East Africa offer a clue. In this single lake, over 800 different species have evolved from a common ancestor in a fraction of the time it took for humans and chimpanzees to evolve from their common ancestor.
What’s even more remarkable is that the diversification of cichlids happened all in the same body of water. Some of these fish became large predators, others adapted to eat algae, sift through sand, or feed on plankton. Each species found its own ecological niche.
Now, researchers from the Universities of Cambridge and Antwerp have determined how this evolution may have happened so quickly. Their results are reported in the journal Science.
The researchers looked at the DNA of over 1,300 cichlids to see if there’s something special about their genes that might explain this rapid evolution. “We discovered that, in some species, large chunks of DNA on five chromosomes are flipped – a type of mutation called a chromosomal inversion,” said senior author Hannes Svardal from the University of Antwerp.
Normally, when animals reproduce, their DNA gets reshuffled in a process called recombination – mixing the genetic material from both parents. But this mixing is blocked within a chromosomal inversion. This means that gene combinations within the inversion are passed down intact without mixing, generation after generation, keeping useful adaptations together and speeding up evolution.
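The effect is easy to see in a toy simulation. The following minimal Python sketch (an invented ten-locus model for illustration, not the analysis in the study) compares offspring produced with a single random crossover against offspring in which an inversion suppresses crossovers across part of the chromosome:

    import random

    # Two parental haplotypes over ten loci; an inversion spanning loci 3-7
    # suppresses crossovers inside it, so that block is inherited intact.
    HAP_A = list("AAAAAAAAAA")
    HAP_B = list("BBBBBBBBBB")
    INVERSION = range(3, 8)

    def offspring(h1, h2, inversion=None):
        """One random crossover; suppressed if it falls inside the inversion."""
        point = random.randint(1, len(h1) - 1)
        if inversion and point in inversion:
            return h1[:]  # crossover suppressed: parental haplotype passed on intact
        return h1[:point] + h2[point:]

    def block_intact(children):
        """Fraction of offspring whose loci 3-7 carry a single parental block."""
        return sum(set(c[3:8]) in ({"A"}, {"B"}) for c in children) / len(children)

    random.seed(0)
    with_inv = [offspring(HAP_A, HAP_B, INVERSION) for _ in range(10000)]
    without = [offspring(HAP_A, HAP_B) for _ in range(10000)]

    print(f"block intact with inversion:    {block_intact(with_inv):.2f}")
    print(f"block intact without inversion: {block_intact(without):.2f}")

With the inversion, the allele combination at loci 3-7 survives in every offspring; without it, roughly half the offspring break the block apart, which is the sense in which inversions keep useful gene combinations together.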
“It’s sort of like a toolbox where all the most useful tools are stuck together, preserving winning genetic combinations that help fish adapt to different environments,” said first author Moritz Blumer from Cambridge’s Department of Genetics.
These preserved sets of genes are sometimes called ‘supergenes’. In Malawi cichlids, the supergenes seem to play several important roles. Although cichlid species can still interbreed, the inversions help keep species separate by preventing their genes from blending too much. This is especially useful in parts of the lake where fish live side by side – like in open sandy areas where there’s no physical separation between habitats.
The genes inside these supergenes often control traits that are key for survival and reproduction – such as vision, hearing, and behaviour. For example, fish living deep in the lake (down to 200 metres) need different visual abilities than those near the surface, require different food, and need to survive at higher pressures. Their supergenes help maintain those special adaptations.
“When different cichlid species interbreed, entire inversions can be passed between them – bringing along key survival traits, like adaptations to specific environments, speeding up the process of evolution,” said Blumer.
The inversions also frequently act as sex chromosomes, helping determine whether a fish becomes male or female. Since sex chromosomes can influence how new species form, this opens new questions about how evolution works.
“While our study focused on cichlids, chromosomal inversions aren’t unique to them,” said co-senior author Professor Richard Durbin, from Cambridge’s Department of Genetics. “They’re also found in many other animals — including humans — and are increasingly seen as a key factor in evolution and biodiversity.”
“We have been studying the process of speciation for a long time,” said Svardal. “Now, by understanding how these supergenes evolve and spread, we’re getting closer to answering one of science’s big questions: how life on Earth becomes so rich and varied.”
Reference:
L. M. Blumer, V. Burskaia, I. Artiushin, J. Saha et al. ‘Introgression dynamics of sex-linked chromosomal inversions shape the Malawi cichlid radiation.’ Science (2025). DOI: 10.1126/science.adr9961
Researchers have found that chunks of ‘flipped’ DNA can help fish quickly adapt to new habitats and evolve into new species, acting as evolutionary ‘superchargers’.
Bacteria fight and feast with the same tool
3 friends, 104 miles, and a tradition of taking the scenic route

Eden Fisher (from left), Amelia Heymach, and Addie Kelsey.
Photo illustration by Liz Zonarich/Harvard Staff
Trio marked each year with a walk to a different New England state
Eileen O’Grady
Harvard Staff Writer
At 4:30 a.m., with headlamps and backpacks strapped on, Amelia Heymach, Eden Fisher, and Addie Kelsey stepped out of Currier House and began walking — southwest through Watertown and Newton, bound for the Connecticut border. With just two weeks until Commencement, the three seniors had one last goal to cross off their College bucket list: a 47-mile walk to commemorate their undergraduate journey.
For Heymach, Fisher, and Kelsey, who became friends on the first day of freshman year, long walks have become a tradition to mark the end of each academic year: As sophomores, they walked to New Hampshire; as juniors to Rhode Island. These ultra-walks might seem extreme, but this trio says they are a way to spend time together while testing their endurance, trust, and commitment.
“People do big walks at transition points in their lives, and it’s not by accident,” Heymach said. “It’s a great opportunity to reflect and center yourself and to think about your goals for the future and reflect on the past. There’s something about those walks that’s so conducive to that sort of thinking, to put away your devices, to be outside, to connect with the lands around you.”

None of them are strangers to trekking. Heymach, a statistics concentrator with a secondary in global health and health policy, hiked the Camino de Santiago in Spain with her mom during a gap year. Fisher, a joint concentrator in integrative biology and math with a secondary in studies of women, gender, and sexuality, is a lifelong runner with four marathons and an ultramarathon under her belt. Kelsey, a psychology concentrator with a secondary in integrative biology, took many walks with her family growing up.
The plans for their 25-mile walk to New Hampshire formed spontaneously after a late-night study session during final exam week of their sophomore fall. They had heard of some students who had walked to the state line, and wanted to see if they could do it, too.
“We went home to sleep, and then the next morning woke up super early and started,” Kelsey said. “There was no preparation.”
Their route, suggested by Google Maps, took them through Lexington, Burlington, and Billerica, sometimes through residential neighborhoods, other times through industrial areas, often with no sidewalks. The December sun was setting as they walked through Lowell; they then crossed the border into Pelham, New Hampshire, finishing in just under 10 hours.
“We got there, and it was dark. We had headlamps,” Heymach recalled. “Cars were going fast. We were on the side of the curb just running to the finish.”
For the return trip they called an Uber.
“It’s so funny Ubering back in like 40 minutes after you spent the entire day from before the sun has risen to sunset, walking,” Kelsey said. “You’re just seeing everything you passed flash by.”
Junior year they walked 32 miles to Rhode Island, heading south through Massachusetts towns including Dedham, Norwood, Walpole, and Wrentham. They crossed the border into Cumberland, Rhode Island, after 12 hours.
To pass the time while walking, the three sing songs and read aloud to each other from books they find in Little Free Libraries. They pack snacks and usually stop to buy pastries and sandwiches along the way.
All three agreed that at the end of a semester of rigorous academics and extracurriculars, something as simple as a long walk is a welcome change.
“I love the speed of a walk,” Heymach said. “Things can feel super fast-paced for many months at a time here. It’s saying, ‘No, we’re going to go our 2.5 miles-per-hour pace for as long as we want to.’”
Fisher agreed. “It allows you to slow down and enjoy things in a different way. I’ve been learning to appreciate a different pace.”
Last month, with Commencement on the horizon, the friends decided it was time to attempt Connecticut. On May 16 they headed southwest through Wellesley, Holliston, Milford, Mendon, and Douglas. To keep momentum, they developed mind games to stay mentally fresh. One rule? They weren’t allowed to ask how much farther they had to go.
“With walking there’s the physical aspect to it — you feel like your legs may be falling off toward the end — but a lot of it is mental,” Heymach said. “You have to tell yourself you’re not walking with a destination, you’re walking indefinitely.”
The final miles took them along the Southern New England Trunkline Trail through Douglas State Forest after dark, where they heard peeping frogs and spotted a beaver. They crossed the border into Thompson, Connecticut, around 10 p.m.
Next year they will be in three different countries, with Heymach doing community health work in Ecuador, Kelsey studying psychology at the University of Cambridge in England, and Fisher at Harvard’s Graduate School of Education.
“I have faith in the friendships that I have, and trust that we’ll be there to support each other regardless of where we are,” Heymach said. “I’m excited to do hikes with them in the future.”
They already have a few in mind: Kelsey has her eye on the Camino de Santiago, Heymach on the Lone Star Hiking Trail in Texas, and Fisher on the North-South Trail in Rhode Island. Heymach and Fisher are also interested in the Long Trail in Vermont.
Plus, there are a few more nearby states they haven’t reached by foot.
“We haven’t gone to New York yet. Or Maine,” Kelsey added. “But New York is far, so we’ll have to split it into a couple of days next time.”
From ‘joyous’ to ‘erotically engaged’ to ‘white-hot angry’
Stephanie Burt’s new anthology rounds up 51 works by queer and trans poets spanning generations
Eileen O’Grady
Harvard Staff Writer

Niles Singer/Harvard Staff Photographer
As Stephanie Burt sees it, queer lyric poetry mirrors the patterns of queer life. She offers many examples in “Super Gay Poems: LGBTQIA+ Poetry After Stonewall,” a new anthology of 51 works by queer and trans poets from the last 55 years.
“I chose only poems I admire and wanted to write an essay about,” said Burt, Donald P. and Katherine B. Loker Professor of English, who paired each work with an original essay providing analysis and historical context. “I looked for stylistic range, from concise to effusive, rhymed-formal to free and chaotic, weird-and-challenging to apparently pellucid. I also looked for emotional range, from joyous to erotically engaged to white-hot angry, quietly curious, resolved, mournful, inviting, shy, and extroverted.”
Burt’s chosen poets span generations, from Frank O’Hara (1926-1966) to Logan February (born in 1999), and address major moments in modern queer history — from Paul Monette’s “The Worrying” (1988) responding to the HIV/AIDS crisis, to Jackie Kay’s “Mummy and donor and Deirdre” (1991), which explores the increasing visibility of queer families.
The anthology also reflects a wide geographic range, from Puerto Rican writer Roque Salas Rivera to Singaporean writer Stephanie Chan. Some names, like Audre Lorde and Adrienne Rich, will be familiar to many readers, while others may offer new discoveries.
In this edited conversation with the Gazette, Burt discusses shifts in the poetry landscape, the thrill of discovering new voices, and the power of poetry to capture historic moments.

Could you talk more about the notion of time in LGBTQIA+ poetry?
Many of us grew up with a very clear, normative set of expectations about how life works: You’re a child, then a teen, and then an adult, and then you’re old. You hang out in same-sex friend groups, and then you date, and then you “get serious,” and then you get married and have kids and raise kids, ideally 2.6 of them.
Often queer time, as encapsulated and addressed in queer poems, works differently. You don’t “grow up” if growing up means abandoning your intimate same-sex attachments in favor of straight-passing dating. Or you, as an adult, derive your energy from the kinds of exciting parties adults are supposed to abandon. Or maybe you go through puberty twice and feel (or act) like a baffled, excitable teen when you’re an adult. Even if you do end up monogamously connected to one stable partner for decades, as several of my poets did, the poetry about that connection works differently: It can feel more hard-won or feel like a former secret.
What changes do we see in LGBTQIA+ poetry after the 1969 Stonewall Uprising?
Post-Stonewall, we see more people come out. More people celebrate, openly, long-term relationships. People raise kids and attend to a next generation. During the 1970s, lesbian poets celebrate lesbian-only or women-only spaces; later on, not so much. During the 1980s and early 1990s, a whole generation — especially, but not only, gay men — lose half or more of their friends to HIV/AIDS, which is still a killer in much of the world but shows up less often in poems.
During the 2010s, people come out as trans or nonbinary. Also during the 2000s and 2010s, more people in the Global South come out and write poems about how their multiple identities and forms of belonging intersect. Sometimes they feel welcome where they grew up, and sometimes — as is the case with Logan February, whose poem “Prayer of the Slut” (2020) is included in the book — they do not and cannot.
Did you make any new discoveries while compiling this book?
Yes, I had never encountered Cherry Smyth, whose poem “In the South That Winter” (2001) is included, or Logan February. I hadn’t paid enough close attention to Melvin Dixon (“The Falling Sky,” written 1992, published 1995), nor to Judy Grahn (“Carol, in the park, chewing on straws,” 1970) until this book gave me the chance to re-examine their work.
Do you see poetry as a tool for documenting LGBTQIA+ history?
All art forms document history because all works of art come from historic moments. I picked the poems here because I loved them all, but some parts of global queer history in English don’t show up because I didn’t or couldn’t find awesome poems about them: queer liberation in the Republic of Ireland, for example, or the relationship between the Filipino diaspora and Filipino/a/x bakla identities. For that latter I recommend Rob Macaisa Colgate’s amazeballs book, published after “Super Gay Poems” went to press.
I did happily — but also sadly — include poems tied to big-deal historic moments such as Paul Monette’s AIDS Coalition to Unleash Power verse [“The Worrying”] which really needs another look these days. That said, some of my favorite poems have nothing to do with big chapters in public history — they construct aesthetic refuges from it, or alternatives to it. May Swenson’s “Found in Diary Dated May 29, 1973” (first book publication 2003), for example, an allegory of lesbian love via plant roots. You can connect it to history if you like, but that’s not what the poem invites us to do first or last.
What do you hope readers will take away from this anthology?
New favorite poets! New favorite poems! And a sense of the queer and trans and ace and intersex and pan and so on possibilities out there today, even at this troubled time for so many of us — possibilities that include fears and catastrophes but also resilience, community, solidarity, and joy.
Turning 2 decades of discovery into impact

Isaac Kohlberg to step down as senior associate provost and chief technology development officer
Kirsten Mabry
Harvard Office of Technology Development
After 20 years of service to the University, Isaac Kohlberg will step down from his role as Harvard University’s senior associate provost and chief technology development officer at the end of this year, concluding an extraordinary chapter that has significantly influenced Harvard’s vision and strategy for advancing research for the public good.
Kohlberg’s two decades at Harvard have been dedicated to one mission: advancing the University’s discoveries into practical applications that deliver impactful solutions for society. He played a key role in building broad corporate relationships and developing commercialization strategies to further advance Harvard research. Kohlberg joined Harvard in 2005 and established the Office of Technology Development (OTD) by consolidating the University’s technology transfer efforts into one centralized program. The office advances innovations emerging from Harvard labs through licensing and the creation of startups, while also fostering collaborations with industry.
Kohlberg built a team with extensive industry experience and strong technical backgrounds. Together, they established a proactive culture in which OTD team members serve as the primary point of contact for Harvard researchers, facilitating industry collaborations and venture creation. Under Kohlberg’s leadership, there was a shift in how Harvard approached the commercialization of innovations developed in its labs; the University’s strategy became more supportive and engaged, increasing the pace of startup formation and pursuing industry relationships to advance the University’s science.
“Investments in science help advance knowledge, fuel progress, and spur economic development. That sense of mission runs through Harvard’s innovation enterprise, and we are grateful for the leadership role Isaac played in supporting a thriving culture of discovery and innovation,” said Harvard University Provost John Manning.
Kohlberg is widely recognized for his vision in forging robust collaborations between academia and industry, as well as for establishing funding mechanisms that bridge the critical development gap between academic research and real-world therapies or applications. Under his leadership, OTD established three accelerator funds: the Blavatnik Biomedical Accelerator, the Grid Accelerator, and the Climate and Sustainability Translational Fund. These initiatives provide essential funding and business development support to translational research projects. Consequently, these accelerators have played a crucial role in enabling research teams to commercialize their discoveries, resulting in the creation of numerous startups and licensed technologies derived from foundational research conducted at Harvard.
These accelerators have supported hundreds of research projects, launched numerous startups, and drawn millions in industry and venture investment back to Harvard. To name just a few, research backed by these accelerators has resulted in innovative cancer therapies licensed to biotech firms, a startup developing a new class of antibiotics created by Harvard chemists, and Harvard-developed technologies that are now featured in products worldwide.
Additionally, under Kohlberg’s leadership, the University formed landmark collaborations with global companies. These relationships brought not only research funding — industry-sponsored research more than doubled during his tenure — but also infused new energy and opportunities into innovations being developed at Harvard.
In the past five years alone, the advancement of Harvard research has resulted in the launch of 96 startups raising nearly $2.8 billion, more than 2,000 reports of innovations, 897 U.S. patents held by Harvard, and $300 million in research funding through industry partnerships.
Crucially, Kohlberg was also dedicated to safeguarding academic independence and making a broad societal impact. Harvard’s approach to corporate relationships — an effort that has grown into a robust collaboration between OTD, the Office of the Vice Provost for Research, and others — ensures that faculty members determine research agendas and that discoveries, even when developed commercially, remain accessible for humanitarian licensing or use in developing countries.
Kohlberg leaves behind a program recognized nationally as a model of excellence — one that combines deep expertise in industry relations, commercialization, and venture creation with a distinctly Harvard-style sense of duty to society.
“As I begin the next chapter of my life, I am deeply proud of all we have accomplished together,” Kohlberg reflected. “Harvard’s community has shown what’s possible when great ideas are met with entrepreneurial spirit, smart funding, and a commitment to the public good. The next wave of discovery and impact is just beginning.”
Before joining Harvard in 2005, Kohlberg was the CEO of Tel Aviv University’s Economic Corporation. From 1989 to 2000, he held various roles supporting innovation at New York University, including vice provost, vice president for industrial liaison, and head of the NYU School of Medicine’s Office of Science and Research. From 1982 to 1989, he served as CEO of YEDA, the commercial arm of the Weizmann Institute of Science in Israel.
Harvard will launch a search for Kohlberg’s successor in the coming weeks, and Kohlberg has committed to serve as an adviser to the University.
Edmund White, professor of creative writing and ‘iconic gay writer of the 20th and 21st centuries,’ dies at 85
S$2 million gift from Tahir Foundation to support Indonesian students at NUS and NTU Singapore, strengthen bilateral ties, and nurture regional talent
The Tahir Foundation, established by prominent Indonesian philanthropist Dato’ Sri Professor Dr Tahir, Chairman of Mayapada Group, has pledged S$2 million to the National University of Singapore (NUS) and Nanyang Technological University, Singapore (NTU Singapore) in support of Indonesian students at the two universities. The gift was announced at a ceremony held on 12 June 2025, at NUS.
Each university will receive S$1 million for student financial aid, enabling more Indonesian students, regardless of their financial circumstances, to benefit from a world-class education in Singapore and to reinforce the strong people-to-people ties between Indonesia and Singapore.
A shared vision for ASEAN’s future
The gift reflects a broader regional ambition: nurturing future leaders, strengthening ASEAN’s talent pipeline, and deepening collaboration across borders. It also affirms the universities’ shared commitment to inclusive education and regional impact.
Dato’ Sri Professor Dr Tahir, Chairman of Mayapada Group said: “I strongly believe that education is fundamental in uplifting society and transforming our collective future. NUS and NTU, as leading global universities, are well-positioned to contribute significantly to Singapore and the world. I hope this gift will enable Indonesian students to realise their potential and inspire them to give back to their communities.”
NUS President Professor Tan Eng Chye said: “At NUS, we believe education is transformative in empowering individuals and uplifting communities. This gift will open opportunities for more Indonesian students to pursue tertiary education at world-class institutions in Singapore, and help to build a more collaborative and resilient Southeast Asia region, by fostering ties, familiarity and connections between the youths of Indonesia and Singapore. We hope this gift will inspire others to also step forward to support and nurture the next generation of leaders in the region through education.”
NTU President Professor Ho Teck Hua said: “Dr Tahir’s gift reflects his belief in the power of education to transcend borders. It enables more talented Indonesian students to pursue their aspirations at NTU, opening doors to life-changing opportunities that can transform not just individuals, but entire communities. The gift embodies our shared conviction that education plays a pivotal role in nurturing future-ready talent and in fostering a more connected region.”
A lifelong commitment to education
Dr Tahir has long championed education as a key driver of social mobility. He studied on a scholarship at Nanyang University, which later merged with the University of Singapore in 1980 to form NUS. He graduated in 1976 with a Bachelor of Commerce and is recognised as a distinguished alumnus of both NUS and NTU.
Over the past two decades, Dr Tahir has been a steadfast supporter of education, contributing to student bursaries and scholarships, as well as research and academic programmes in Singapore. In 2012, he made a landmark S$30 million gift to the NUS Yong Loo Lin School of Medicine, his largest philanthropic gift to an educational institution. The gift helped advance medical research and academic programmes.
Dr Tahir also funded key initiatives such as the Tahir NTU-Universitas Airlangga Students Exchange Programme, which strengthens academic ties between Singapore and Indonesia, as well as the NTU Priorities Fund. In 2021, Dr Tahir made a gift in support of student bursaries, providing financial assistance to deserving NTU students and enabling them to pursue their education with greater peace of mind.
Investing in regional talent and collaboration
Strategically positioned in the heart of Southeast Asia, Singapore serves as a vital hub for regional education and collaboration. As the nation’s flagship university and top-ranked institution in Southeast Asia, NUS plays a leading role in advancing research, innovation and global partnerships across the region.
Situated in Singapore’s innovation corridor, NTU is consistently ranked among the world’s top young universities and is recognised for its excellence in sustainability, engineering, and interdisciplinary research. The University plays a pivotal role in advancing technological innovation, industry collaboration and talent development, contributing to Southeast Asia’s progress and global competitiveness.
Through this latest gift from the Tahir Foundation, more Indonesian students will have the opportunity to join the vibrant, diverse and intellectually dynamic communities at NUS and NTU. This will not only shape their personal and professional growth but will also contribute to deeper regional understanding and cooperation — laying the groundwork for a stronger, more inclusive Southeast Asia.
Cambridge scholar helps bring Ukraine’s pain and power to the stage in critically acclaimed creative collaboration

The Guardian calls it ‘shattering’. The Stage heralds it as a ‘challenging, artfully constructed indictment of Russian war crimes in Ukraine.’
Written by Anastasiia Kosodii and Josephine Burton, and directed by Burton, The Reckoning channels voices of Ukrainians across the country – a priest, a volunteer, a dentist, a security guard, a journalist – who are forced to confront the sudden horrors of invasion and occupation and to repair bonds of trust amid violence and fear. These voices are real, drawn from witness statements collected and conserved by the journalists and lawyers behind The Reckoning Project.
Rory Finnin, Professor of Ukrainian Studies and a Fellow of Robinson College at Cambridge, collaborated with Burton to help shape the play. His decades of research into Ukraine’s culture and society formed the basis for a grant in support of The Reckoning from the University of Cambridge’s AHRC Impact Starter Fund account.
“Our collaboration with Rory Finnin has been invaluable throughout the making of The Reckoning,” said Burton, who is also Artistic Director and Chief Executive of Dash Arts. “Rory’s insights into Ukraine’s past and present gave me deeper grounding as a director and co-writer and helped sharpen the questions the play asks of its audience.”
The Reckoning blends dynamic storytelling with movement, music, and food to forge new routes of solidarity and understanding with the audience. As Everything Theatre notes in a glowing review, “We leave not as passive spectators but as an active part of the struggle.”
Attendees share in a summer salad made over the course of the play by the Ukrainian and British cast – Tom Godwin, Simeon Kyslyi, Marianne Oldham, and Olga Safronova – who bring empathy, humour, and integrity to each scene. The conclusion of each performance features an invited speaker from the audience who comes to the stage to reckon with their own experience of the play from different ethical and intellectual perspectives.
Professor Finnin spoke on the play’s first night at the Arcola Theatre.
“Over three years into Russia’s full-scale invasion, we are too often tempted to turn our eyes away from Ukraine,” said Finnin. “But The Reckoning empowers us to look closely and to see with new purpose. It has been an incredible privilege to support a dynamic work of art that brings Ukrainian voices to the fore and challenges us to listen and respond to them, with urgency and moral clarity.”
The Reckoning runs through 28 June at London’s Arcola Theatre.
The Reckoning is an intimate work of documentary theatre composed from a verified archive of witness testimonies chronicling Russia’s war of aggression. It is now playing at London’s Arcola Theatre to universal acclaim.
Hari Raya Haji, a pinnacle of spiritual and humanitarian struggle
By Dr Azhar Ibrahim Alwee, Senior Lecturer from the Dept of Malay Studies, Faculty of Arts and Social Sciences at NUS
Give more thought to lowering Singapore’s voting age to 18
By Asst Prof Elvin Ong, from the Dept of Political Science, Faculty of Arts and Social Sciences at NUS
Kimberle Lau wants to treat your sweet tooth, but in a healthy way
The shadow architects of power
In Washington, where conversations about Russia often center on a single name, political science doctoral candidate Suzanne Freeman is busy redrawing the map of power in autocratic states. Her research upends prevailing narratives about Vladimir Putin’s Russia, asking us to look beyond the individual to understand the system that produced him.
“The standard view is that Putin originated Russia’s system of governance and the way it engages with the world,” Freeman explains. “My contention is that Putin is a product of a system rather than its author, and that his actions are very consistent with the foreign policy beliefs of the organization in which he was educated.”
That organization — the KGB and its successor agencies — stands at the center of Freeman’s dissertation, which examines how authoritarian intelligence agencies intervene in their own states’ foreign policy decision-making processes, particularly decisions about using military force.
Dismantling the “yes men” myth
Past scholarship has relied on an oversimplified characterization of intelligence agencies in authoritarian states. “The established belief that I’m challenging is essentially that autocrats surround themselves with ‘yes’ men,” Freeman says. She notes that this narrative stems in great part from a famous Soviet failure, when intelligence officers were too afraid to contradict Stalin’s belief that Nazi Germany wouldn’t invade in 1941.
Freeman’s research reveals a far more complex reality. Through extensive archival work, including newly declassified documents from Lithuania, Moldova, and Poland, she shows that intelligence agencies in authoritarian regimes actually have distinct foreign policy preferences and actively work to advance them.
“These intelligence agencies are motivated by their organizational interests, seeking to survive and hold power inside and beyond their own borders,” Freeman says.
When an international situation threatens those interests, authoritarian intelligence agencies may intervene in the policy process using strategies Freeman has categorized in an innovative typology: indirect manipulation (altering collected intelligence), direct manipulation (misrepresenting analyzed intelligence), preemption in the field (unauthorized actions that alter a foreign crisis), and coercion (threats against political leadership).
“By intervene, I mean behaving in some way that’s inappropriate given what their mandate is,” Freeman explains. That mandate includes providing policy advice. “But sometimes intelligence agencies want to make their policy advice look more attractive by manipulating information,” she notes. “They may change the facts out on the ground, or in very rare circumstances, coerce policymakers.”
From Soviet archives to modern Russia
Rather than studying contemporary Russia alone, Freeman uses historical case studies of the Soviet Union’s KGB. Her research into this agency’s policy intervention covers eight foreign policy crises between 1950 and 1981, including uprisings in Eastern Europe, the Sino-Soviet border dispute, and the Soviet-Afghan War.
What she discovered contradicts prior assumptions that the agency was primarily a passive information provider. “The KGB had always been important for Soviet foreign policy and gave policy advice about what they thought should be done,” she says. Intelligence agencies were especially likely to pursue policy intervention when facing a “dual threat”: domestic unrest sparked by foreign crises combined with the loss of intelligence networks abroad.
This organizational motivation, rather than simply following a leader’s preferences, drove policy recommendations in predictable ways.
Freeman sees striking parallels to Russia’s recent actions in Ukraine. “This dual organizational threat closely mirrors the threat that the KGB faced in Hungary in 1956, Czechoslovakia in 1968, and Poland from 1980 to 1981,” she explains. After 2014, Ukrainian intelligence reform weakened Russian intelligence networks in the country — a serious organizational threat to Russia’s security apparatus.
“Between 2014 and 2022, this network weakened,” Freeman notes. “We know that Russian intelligence had ties with a polling firm in Ukraine, where they had data saying that 84 percent of the population would view them as occupiers, and that almost half of the Ukrainian population was willing to fight for Ukraine.” Despite these polls, officers recommended going into Ukraine anyway.
This pattern resembles the KGB’s use of manipulated intelligence to advocate for invading Afghanistan — a parallel that helps explain Russia’s foreign policy decisions beyond just Putin’s personal preferences.
Scholarly detective work
Freeman’s research innovations have allowed her to access previously unexplored material. “From a methodological perspective, it’s new archival material, but it’s also archival material from regions of a country, not the center,” she explains.
In Moldova, she examined previously classified KGB documents: a huge volume of newly available and unstructured material that provided insight into how anti-Soviet sentiment during foreign crises affected the KGB.
Freeman’s willingness to search beyond central archives distinguishes her approach, especially valuable as direct research in Russia becomes increasingly difficult. “People who want to study Russia or the Soviet Union who are unable to get to Russia can still learn very meaningful things, even about the central state, from these other countries and regions.”
From Boston to Moscow to MIT
Freeman grew up in Boston in an academic, science-oriented family; both her parents were immunologists. Going against the grain, she was drawn to history, particularly Russian and Soviet history, beginning in high school.
“I was always curious about the Soviet Union and why it fell apart, but I never got a clear answer from my teachers,” says Freeman. “This really made me want to learn more and solve that puzzle myself.”
At Columbia University, she majored in Slavic studies and completed a master’s degree at the School of International and Public Affairs. Her undergraduate thesis examined Russian military reform, a topic that gained new relevance after Russia’s 2014 invasion of Ukraine.
Before beginning her doctoral studies at MIT, Freeman worked at the Russia Maritime Studies Institute at the U.S. Naval War College, researching Russian military strategy and doctrine. There, surrounded by scholars with political science and history PhDs, she found her calling.
“I decided I wanted to be in an academic environment where I could do research that I thought would prove valuable,” she recalls.
Bridging academia and public education
Beyond her core research, Freeman has established herself as an innovator in war-gaming methodology. With fellow PhD student Benjamin Harris, she co-founded the MIT Wargaming Working Group, which has developed a partnership with the Naval Postgraduate School to bring mid-career military officers and academics together for annual simulations.
Their work on war-gaming as a pedagogical tool resulted in a peer-reviewed publication in PS: Political Science & Politics titled “Crossing a Virtual Divide: Wargaming as a Remote Teaching Tool.” This research demonstrates that war games are effective tools for active learning even in remote settings and can help bridge the civil-military divide.
When not conducting research, Freeman works as a tour guide at the International Spy Museum in Washington. “I think public education is important — plus they have a lot of really cool KGB objects,” she says. “I felt like working at the Spy Museum would help me keep thinking about my research in a more fun way and hopefully help me explain some of these things to people who aren’t academics.”
Looking beyond individual leaders
Freeman’s work offers vital insight for policymakers who too often focus exclusively on autocratic leaders, rather than the institutional systems surrounding them. “I hope to give people a new lens through which to view the way that policy is made,” she says. “The intelligence agency and the type of advice that it provides to political leadership can be very meaningful.”
As tensions with Russia continue, Freeman believes her research provides a crucial framework for understanding state behavior beyond individual personalities. “If you're going to be negotiating and competing with these authoritarian states, thinking about the leadership beyond the autocrat seems very important.”
Currently completing her dissertation as a predoctoral fellow at George Washington University’s Institute for Security and Conflict Studies, Freeman aims to contribute critical scholarship on Russia’s role in international security and inspire others to approach complex geopolitical questions with systematic research skills.
“In Russia and other authoritarian states, the intelligence system may endure well beyond a single leader’s reign,” Freeman notes. “This means we must focus not on the figures who dominate the headlines, but on the institutions that shape them.”
© Photo: Chris Burns
Bringing meaning into technology deployment
In 15 TED Talk-style presentations, MIT faculty recently discussed their pioneering research that incorporates social, ethical, and technical considerations and expertise, each supported by seed grants established by the Social and Ethical Responsibilities of Computing (SERC), a cross-cutting initiative of the MIT Schwarzman College of Computing. The call for proposals last summer was met with nearly 70 applications. A committee with representatives from every MIT school and the college convened to select the winning projects that received up to $100,000 in funding.
“SERC is committed to driving progress at the intersection of computing, ethics, and society. The seed grants are designed to ignite bold, creative thinking around the complex challenges and possibilities in this space,” said Nikos Trichakis, co-associate dean of SERC and the J.C. Penney Professor of Management. “With the MIT Ethics of Computing Research Symposium, we felt it important to not just showcase the breadth and depth of the research that’s shaping the future of ethical computing, but to invite the community to be part of the conversation as well.”
“What you’re seeing here is kind of a collective community judgment about the most exciting work when it comes to research, in the social and ethical responsibilities of computing being done at MIT,” said Caspar Hare, co-associate dean of SERC and professor of philosophy.
The full-day symposium on May 1 was organized around four key themes: responsible health-care technology, artificial intelligence governance and ethics, technology in society and civic engagement, and digital inclusion and social justice. Speakers delivered thought-provoking presentations on a broad range of topics, including algorithmic bias, data privacy, the social implications of artificial intelligence, and the evolving relationship between humans and machines. The event also featured a poster session, where student researchers showcased projects they worked on throughout the year as SERC Scholars.
Highlights from the MIT Ethics of Computing Research Symposium in each of the theme areas, many of them available to watch on YouTube, included:
Making the kidney transplant system fairer
Policies regulating the organ transplant system in the United States are set by a national committee; they often take more than six months to create and then years to implement, a timeline that many on the waiting list simply can’t survive.
Dimitris Bertsimas, vice provost for open learning, associate dean of business analytics, and Boeing Professor of Operations Research, shared his latest work in analytics for fair and efficient kidney transplant allocation. Bertsimas’ new algorithm examines criteria like geographic location, mortality, and age in just 14 seconds, a monumental change from the usual six hours.
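The article does not describe how the allocation algorithm works internally. As a rough, hypothetical sketch of the kind of optimization such a system might solve, here is a toy assignment model; the criteria, weights, and scores are invented purely for illustration.

```python
# Toy kidney-allocation model, for illustration only: this is not
# Bertsimas' actual formulation. Each (kidney, patient) pair is scored
# on invented criteria -- distance, mortality risk, age gap -- and the
# resulting assignment problem is solved as a linear program.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_kidneys, n_patients = 3, 5

# Hypothetical per-pair criteria (higher score = better match).
distance_km = rng.uniform(10, 2000, (n_kidneys, n_patients))
mortality_risk = rng.uniform(0, 1, (n_kidneys, n_patients))
age_gap_years = rng.uniform(0, 40, (n_kidneys, n_patients))
score = -0.001 * distance_km - 2.0 * mortality_risk - 0.05 * age_gap_years

# Maximize total score; each kidney allocated exactly once, each patient
# receives at most one kidney. The LP relaxation of the assignment
# problem has integral optima, so the solution comes out 0/1.
c = -score.ravel()  # linprog minimizes, so negate
A_eq = np.zeros((n_kidneys, n_kidneys * n_patients))
for k in range(n_kidneys):
    A_eq[k, k * n_patients:(k + 1) * n_patients] = 1
b_eq = np.ones(n_kidneys)
A_ub = np.zeros((n_patients, n_kidneys * n_patients))
for p in range(n_patients):
    A_ub[p, p::n_patients] = 1
b_ub = np.ones(n_patients)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
assignment = res.x.reshape(n_kidneys, n_patients).round()
print(assignment)  # 0/1 matrix: which patient receives each kidney
```

The real system evaluates far richer policy criteria over national waiting lists; the point of the sketch is only that, once framed as an optimization, thousands of policy scenarios can be re-solved in seconds rather than deliberated over months.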
Bertsimas and his team work closely with the United Network for Organ Sharing (UNOS), a nonprofit that manages most of the national donation and transplant system through a contract with the federal government. During his presentation, Bertsimas shared a video from James Alcorn, senior policy strategist at UNOS, who offered this poignant summary of the impact the new algorithm has:
“This optimization radically changes the turnaround time for evaluating these different simulations of policy scenarios. It used to take us a couple months to look at a handful of different policy scenarios, and now it takes a matter of minutes to look at thousands and thousands of scenarios. We are able to make these changes much more rapidly, which ultimately means that we can improve the system for transplant candidates much more rapidly.”
The ethics of AI-generated social media content
As AI-generated content becomes more prevalent across social media platforms, what are the implications of disclosing (or not disclosing) that any part of a post was created by AI? Adam Berinsky, Mitsui Professor of Political Science, and Gabrielle Péloquin-Skulski, PhD student in the Department of Political Science, explored this question in a session that examined recent studies on the impact of various labels on AI-generated content.
In a series of surveys and experiments affixing labels to AI-generated posts, the researchers looked at how specific words and descriptions affected users’ perception of deception, their intent to engage with the post, and ultimately their belief in whether the post was true or false.
“The big takeaway from our initial set of findings is that one size doesn’t fit all,” said Péloquin-Skulski. “We found that labeling AI-generated images with a process-oriented label reduces belief in both false and true posts. This is quite problematic, as labeling intends to reduce people’s belief in false information, not necessarily true information. This suggests that labels combining both process and veracity might be better at countering AI-generated misinformation.”
Using AI to increase civil discourse online
“Our research aims to address how people increasingly want to have a say in the organizations and communities they belong to,” Lily Tsai explained in a session on experiments in generative AI and the future of digital democracy. Tsai, Ford Professor of Political Science and director of the MIT Governance Lab, is conducting ongoing research with Alex Pentland, Toshiba Professor of Media Arts and Sciences, and a larger team.
Online deliberative platforms have recently been rising in popularity across the United States in both public- and private-sector settings. Tsai explained that with technology, it’s now possible for everyone to have a say — but doing so can be overwhelming, or even feel unsafe. First, too much information is available, and second, online discourse has become increasingly “uncivil.”
The group focuses on “how we can build on existing technologies and improve them with rigorous, interdisciplinary research, and how we can innovate by integrating generative AI to enhance the benefits of online spaces for deliberation.” They have developed their own AI-integrated platform for deliberative democracy, DELiberation.io, and rolled out four initial modules. All studies have been in the lab so far, but they are also working on a set of forthcoming field studies, the first of which will be in partnership with the government of the District of Columbia.
Tsai told the audience, “If you take nothing else from this presentation, I hope that you’ll take away this — that we should all be demanding that technologies that are being developed are assessed to see if they have positive downstream outcomes, rather than just focusing on maximizing the number of users.”
A public think tank that considers all aspects of AI
When Catherine D’Ignazio, associate professor of urban science and planning, and Nikko Stevens, postdoc at the Data + Feminism Lab at MIT, initially submitted their funding proposal, they weren’t intending to develop a think tank, but a framework — one that articulated how artificial intelligence and machine learning work could integrate community methods and utilize participatory design.
In the end, they created Liberatory AI, which they describe as a “rolling public think tank about all aspects of AI.” D’Ignazio and Stevens gathered 25 researchers from a diverse array of institutions and disciplines who authored more than 20 position papers examining the most current academic literature on AI systems and engagement. They intentionally grouped the papers into three distinct themes: the corporate AI landscape, dead ends, and ways forward.
“Instead of waiting for OpenAI or Google to invite us to participate in the development of their products, we’ve come together to contest the status quo, think bigger-picture, and reorganize resources in this system in hopes of a larger societal transformation,” said D’Ignazio.
© Photo: Gretchen Ertl
VitalHide puts privacy first in the age of wireless health monitoring
Why U.S. should be worried about Ukrainian attack on Russian warplanes
A drone is launched in late May in the Zaporizhzhia region of Ukraine.
Photo by Ukrinform/NurPhoto via AP
Audacious — and wildly successful — use of inexpensive drones against superior force can be used anywhere, against anyone
Christina Pazzanese
Harvard Staff Writer
Ukraine stunned Russia — and the world — when it launched Operation Spider’s Web, an audacious drone attack on June 1 that damaged or destroyed dozens of Russian warplanes. In the days since, Russian President Vladimir Putin has responded by escalating aerial assaults, launching record drone and missile attacks on Ukraine.
Beyond its secrecy and complexity, military analysts say Ukraine’s remarkable success using inexpensive homemade drones against a larger, more formidable adversary ushers modern warfare into a new and potentially troubling era.
In this edited conversation, the Gazette spoke with Eric Rosenbach, a senior lecturer at Harvard Kennedy School and past executive co-director of the Belfer Center, about how drones are rapidly reshaping global conflicts. Rosenbach, a former Army intelligence officer, was chief of staff to U.S. Secretary of Defense Ashton B. Carter from 2015 to 2017, a job in which he advised on Russia policy and led efforts to improve innovation at the Defense Department.
Beyond its success, what was significant about Operation Spider’s Web?
I think the most significant thing is that it showed Ukraine’s ability to reach deep into Russia and to hit targets that have a very high level of strategic significance. The use of drone technology itself was important, but it was more about the fact that they were able to project power in a way that I’m sure deeply impacted Putin.
So not only the targets they hit, but also that Ukraine was met with no resistance from the Russians?
Exactly. The targets that they hit, they’re called strategic aircraft. Those are aircraft that are used to deliver nuclear weapons. They were also the type of aircraft that were used by the Russians to launch many of the high-end missile attacks against Ukraine for some of their hypersonic and long-range cruise missiles. So symbolically, that was super important but also had an important operational effect.
And if you look at some of the details about how the Ukrainians must have done this, it’s amazing. It will be an amazing spy thriller movie to watch the Ukrainians smuggling in drones over borders, probably loading them into trucks, driving the trucks probably within, I bet, five to 10 kilometers of these military bases, launching the drones through the roofs of these cargo trucks and then piloting them in to hit the right targets. They must have done an enormous amount of advanced intelligence work to pull this off.
Many observers say this was a watershed event that suggests we’ve entered a new era of modern warfare. Do you agree?
Yes. I think it’s important to look at the sophistication of the Ukrainian drone program and how it has evolved. In the very beginning of the war, they were using a lot of off-the-shelf drone technology, wiring some munitions to them, using those then to attack Russian tanks or infantry formations.
Now, it’s quite different: They’re designing their own drones, using a global supply chain to get the parts, including from China, and then doing things like 3D printing the drones themselves. The production line produces numbers that, quite frankly, are much higher than what the U.S. is producing right now or probably could produce.
Also, they’re really pushing forward the capabilities of these weapons in terms of what they can do, and how they’re utilized. And part of that is based on the artificial intelligence that they have been able to develop, and the data. Some of it is also just learning on the battlefield.
What is — or should be — most worrisome for the U.S., for NATO countries, and others whose drone programs lag behind Ukraine and Russia?
Three points here, I would say. One is that the Taiwanese are very closely studying what the Ukrainians have done and will do in the future in terms of using pretty inexpensive autonomous weapons for both defense and offense. In particular, the way they hit deep inside Russia.
Xi Jinping has said that Taiwan will become part of China sometime — some people say by 2027. I doubt that, but there’s a really important reason the Taiwanese would want the ability to strike deep within China if the [People’s Liberation Army] launched a military action against the island. I just wrote a report on that a few months ago that talks about how the Taiwanese could learn from the Ukrainians to develop their own type of autonomous weapons.
I would say the second is the Europeans are very nervous about how advanced the Russians have become with these autonomous weapons. They ask, for example, “What if the Russians wanted to try to mount some type of limited, small incursion into one of the Baltic states to test Article 5 of NATO and the way that they were doing it was, in large part, through the use of autonomous weapons?” The European defense technology sector is not very well developed, less so than the United States, and far behind Ukraine. I think the Europeans recognize that.
For the U.S., there’s a pretty jarring homeland security takeaway from this, which is: Imagine there are some bad actors — it wouldn’t even have to be a nation state, but it could be a terrorist group — that decides they’re going to 3D print these within the United States, or go across the border, and mimic the Ukrainians with a high-profile attack. I know a lot of people recognize that, but this should really drive home that the U.S. is vulnerable to attacks like this.
How vulnerable?
It’s getting better. If you look at high-profile public events, they’re called national security events. There’s quite good technology in place to protect the president, for example, when he’s out and about, the Super Bowl, things that would be logical opportunities to attack.
It’s the lower-profile things that are a much softer target that still could have a big impact both on the American psyche and, probably, the economy.
What does the U.S. need to do to prevent an attack like this?
The United States will always be vulnerable to some degree to attacks based on newer technology, whether it’s a cyberattack, a space-based attack, or an attack from autonomous weapons. The risk will never be zero.
So that’s also important to recognize when you read, for example, about a “golden dome” that will protect everything in the country. It’s just not realistic to think that we’ll ever be at zero risk, fully protected by some magical defense technology.
So that means that we probably do need to invest more in counter-UAV (Unmanned Aerial Vehicle) homeland defense. And you see in the headlines of the last couple days that the U.S. is pulling more of the support we’ve given to Ukraine back to protect Americans, whether in the Middle East or even in the homeland, to try to do that.
How close are we to having something effective?
To a limited degree, there are pretty effective counter-UAV systems that have been developed, but they exist in very limited numbers, not enough to reach the level where we would rest easier. I think it’s a years-not-months type of scenario.
What’s the potential global fallout from this demonstration by the Ukrainians?
From a geopolitical perspective, I think it makes clear that a peace agreement is nowhere near in the future. I’m sure the Russian response will be very heavy. The way Putin and the Russians have always reacted to something like this is with an overwhelming response of force. So that will be unfortunate. It probably will be one of the worst attacks we’ve seen against Ukraine from a technology perspective.
One thing to recognize about this is, although the operation was sophisticated, the drones were not fully autonomous. They were not completely reliant on AI, and they didn’t travel 5,000 miles. It was still a local-based operation with a pilot operating them to do the targeting. What would really start to worry you is if there were drones that were very long-range and had full autonomy — they were doing the identification, selection, and targeting of targets on their own, without a human in the loop.
Why would that be more worrisome?
The range is a significant limiting factor right now, both in defense and offense, when you’re using autonomous weapons. Think about the U.S., for example: If someone drove a boat just off the coast of the United States, they wouldn’t even have to go through the border. If they had produced and put everything together there, and they could get even a couple hundred miles of range, you could see how a lot of people in Washington, D.C., or other major metropolitan areas would be very vulnerable.
As I mentioned, true fully autonomous weapons would mean that a terrorist or nation-state could simply program targets, launch the killer drones, and then escape. Because that technology is still several years from being fully mature, the likelihood of this right now is low. The technology isn’t fully developed and hasn’t really been tested extensively.
There is one thing to worry about: Throughout history, technologies that are lethal but have not been subject to a lot of testing end up having unintended consequences that sometimes are worse because of the catalytic effects in generating new classes of weapons.
Remember when corporate America steered clear of politics on social media?
Elisabeth Kempf.
Stephanie Mitchell/Harvard Staff Photographer
Christina Pazzanese
Harvard Staff Writer
Study finds Twitter surge starting in 2017, most of it Democratic-leaning by surprising range of firms, with negative effects on stock price
There was a time when corporate America was not very online. Most companies used social media for promoting products and services or engaging with consumers in a friendly fashion. Political posts on a company Twitter account were rare.
That all changed between 2012 and 2022, when the volume of partisan speech on Twitter (now called X) from large corporations surged, more than doubling beginning in 2017, according to a new National Bureau of Economic Research working paper.
Researchers said the spike was driven disproportionately by companies using language associated with Democratic politicians. The moves frequently had negative effects on company stock prices.
In this edited conversation, the paper’s co-author Elisabeth Kempf, Jaime and Raquel Gilinski Associate Professor of Business Administration, explains corporate Twitter’s abrupt shift.
It seems ubiquitous now, but corporations putting out partisan-sounding tweets is only a recent development. What did you find?
Partisan speech was very rare for companies. Less than 1 percent of all the tweets they sent between 2012 and 2017 would constitute what was, according to our measures, very partisan speech.
Wading into partisan, polarized issues can be tricky for companies. We saw the first big change in 2017 where both Democratic and Republican partisan speech picks up, and then this big asymmetry starting in 2019.
That was the part that I found the most surprising. I was expecting to see some companies also start to use more Republican-sounding speech — to see, essentially, evidence of polarization. It was quite surprising to see that everybody seemed to adopt more Democratic speech after 2019 and that it was happening across the board, companies in blue states, red states, companies that are in consumer-facing businesses, also B2B [business to business].
How did you define partisan corporate speech?
We built on previous work by Jesse Shapiro, my colleague at Harvard, and his co-authors. They developed this methodology to look at partisan speech in Congress that identifies phrases that allow you to correctly guess a speaker’s political party.
We used their methodology and applied it to tweets sent by Democratic and Republican politicians. That allowed us to identify highly partisan phrases and then apply them to corporate speech. Essentially, phrases that sound like they could be coming from a Democratic or Republican politician are what we consider partisan speech.
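The underlying idea can be sketched in a few lines of code. The following is a minimal, hypothetical illustration of phrase-based partisanship scoring, not the authors’ actual estimator; the example tweets and the smoothing choice are invented.

```python
# Minimal sketch of phrase-based partisanship scoring (illustrative only):
# score each phrase by how well it predicts the speaker's party in a
# training corpus of politicians' tweets, then score corporate tweets
# by the phrases they use.
from collections import Counter

dem_tweets = ["climate justice now", "protect voting rights", "tax cuts"]
rep_tweets = ["secure the border", "tax cuts for families", "protect freedom"]

def phrase_counts(tweets, n=2):
    """Count n-word phrases (bigrams by default) across a set of tweets."""
    counts = Counter()
    for t in tweets:
        words = t.split()
        counts.update(" ".join(words[i:i + n]) for i in range(len(words) - n + 1))
    return counts

dem, rep = phrase_counts(dem_tweets), phrase_counts(rep_tweets)

def partisanship(phrase):
    """P(Democrat | phrase) with add-one smoothing; 0.5 = nonpartisan."""
    d, r = dem[phrase] + 1, rep[phrase] + 1
    return d / (d + r)

def score_tweet(tweet, n=2):
    """Average partisanship of a tweet's phrases."""
    words = tweet.split()
    phrases = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    return sum(partisanship(p) for p in phrases) / max(len(phrases), 1)

print(score_tweet("protect voting rights"))  # > 0.5: leans Democratic
print(score_tweet("secure the border"))      # < 0.5: leans Republican
```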
What did you learn about how partisan corporate speech has changed over time?
The first fact is that partisan corporate speech has become more common over this time period, 2012 to 2022. The second fact is that this growth was really asymmetric. Starting in 2019, we see rapid growth in Democratic-sounding speech, while Republican-sounding speech stayed constant or, if anything, decreased toward the end. And third, when we look at stock returns around these partisan corporate statements, we see that they tend to be followed by negative abnormal stock returns.
That said, we also see a lot of heterogeneity depending on the investor composition. For example, if there are more funds with ESG [environmental, social, and governance] objectives, then we see that the stock return after a Democratic-sounding statement tends to be less negative.
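“Abnormal returns” here refers to a standard event-study construction: the stock’s actual return minus the return predicted from its usual relationship to the market. A minimal sketch, assuming a simple market model on synthetic data (the paper’s exact specification may differ):

```python
# Market-model event study sketch (a standard approach; illustrative only).
# Abnormal return = actual return minus the return predicted from the
# stock's estimated alpha and beta against the market.
import numpy as np

rng = np.random.default_rng(1)
market = rng.normal(0.0003, 0.01, 250)                      # daily market returns
stock = 0.0001 + 1.2 * market + rng.normal(0, 0.005, 250)   # synthetic stock

# Estimate alpha and beta on an estimation window before the event.
beta, alpha = np.polyfit(market[:200], stock[:200], 1)

# Cumulative abnormal return over a short window around the "tweet" (day 230).
window = slice(228, 233)
abnormal = stock[window] - (alpha + beta * market[window])
print("CAR:", abnormal.sum())
```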
Chart: Corporate partisan tweets over time
Is it clear why these partisan Twitter statements suddenly escalated during this time?
In 2017, we see both Democratic- and Republican-sounding speech go up for the first time. We don’t have so much to say on that.
What we do have more to say on is when we start to see this divergence between Democratic- and Republican-sounding speech, which is in January 2019. That was when BlackRock chief executive Larry Fink put out a pretty influential letter, his “Dear CEOs” annual letter. He explicitly called on CEOs to be more vocal, take more stances on polarizing issues.
This would be another potential piece of evidence to suggest that large institutional investors may have also played a role in this. BlackRock is the world’s largest asset manager so there was a lot of news reporting that this had kicked off a lot of discussion in the business world.
Also in 2019, the Business Roundtable [a CEO lobbying association] came out saying shareholder value maximization should not be the sole purpose. So, I do think it was quite influential.
We also saw massive growth in 2019 in assets under management with a sustainability mandate. This is when a big shift happened in the investment industry.
Recently, we’ve seen protest campaigns against Tesla and Bud Light spurred by perceived partisan corporate statements. Could robust consumer boycotts that threaten profits have contributed to declines in shares as opposed to just public or investor reaction to statements?
This was the finding we struggled the most to make sense of. One possibility is, as you say, that there could be effects of this partisan speech on what we would call the cash flows in finance, or the profits that the companies make. For example, a company might be losing employees or losing customers. We argue that partisan speech can affect a third potential stakeholder group — investors — and how much they are willing to hold the stock.
We were looking at the 500 largest companies by market capitalization, so these are very big companies that need to raise capital from a pretty heterogeneous investor base. It’s hard for these large companies to raise capital only from Democrats or only from Republicans. And so, once you make a partisan statement that aligns with one group but not with the other, you might see precisely this negative stock price effect.
It is also difficult to explain the growth in Democratic-sounding speech with consumer or employee preferences alone. To be clear: We wouldn’t want to conclude that investor or consumer preferences did not play a role at all. But we thought it was worth pointing out that it was hard to explain everything just with consumers or just with employees.
Why is that? Because we see this rapid growth both for companies in consumer-facing and non-consumer-facing industries. Boycotts could explain why certain retail companies might adopt a certain speech, but we see it even in materials, a sector with companies that make metals, chemicals, coatings, and things like this. It’s hard to think that consumer preferences for partisan speech would be super strong there.
I thought employees could be quite relevant, but we saw the same trend in companies with employees located in very Democratic areas versus Republican areas. We also didn’t see that labor market tightness played a role. You might imagine that if you’re really in desperate need of talent, maybe you would engage more in this partisan speech if you thought it would help in hiring.
There were a couple of results that just made it hard to explain this with consumers or employees alone. Whereas, if you think about this investment channel, it could explain why we see it in companies that are based in blue areas, red areas. It could explain why this is such a broad-based phenomenon.
Are there other aspects of corporate speech ripe for further research?
Yes. I don’t think we have fully answered the question of why this big shift was happening then. We have suggestive evidence that investors could have played a role. But I don’t think there’s a full, causal relationship yet. So that is one big open question: How influential were investors versus consumers versus employees?
The second question is: What are the more long-term effects? We’re looking at stock prices over a relatively short window. What this does in the longer run to companies, and potentially also to their relationships with politicians, and how partisan speech by companies influences politics are, I think, exciting areas for future research.
Photonic processor could streamline 6G wireless signal processing
As more connected devices demand an increasing amount of bandwidth for tasks like teleworking and cloud computing, it will become extremely challenging to manage the finite amount of wireless spectrum available for all users to share.
Engineers are employing artificial intelligence to dynamically manage the available wireless spectrum, with an eye toward reducing latency and boosting performance. But most AI methods for classifying and processing wireless signals are power-hungry and can’t operate in real-time.
Now, MIT researchers have developed a novel AI hardware accelerator that is specifically designed for wireless signal processing. Their optical processor performs machine-learning computations at the speed of light, classifying wireless signals in a matter of nanoseconds.
The photonic chip is about 100 times faster than the best digital alternative, while converging to about 95 percent accuracy in signal classification. The new hardware accelerator is also scalable and flexible, so it could be used for a variety of high-performance computing applications. At the same time, it is smaller, lighter, cheaper, and more energy-efficient than digital AI hardware accelerators.
The device could be especially useful in future 6G wireless applications, such as cognitive radios that optimize data rates by adapting wireless modulation formats to the changing wireless environment.
By enabling an edge device to perform deep-learning computations in real-time, this new hardware accelerator could provide dramatic speedups in many applications beyond signal processing. For instance, it could help autonomous vehicles make split-second reactions to environmental changes or enable smart pacemakers to continuously monitor the health of a patient’s heart.
“There are many applications that would be enabled by edge devices that are capable of analyzing wireless signals. What we’ve presented in our paper could open up many possibilities for real-time and reliable AI inference. This work is the beginning of something that could be quite impactful,” says Dirk Englund, a professor in the MIT Department of Electrical Engineering and Computer Science, principal investigator in the Quantum Photonics and Artificial Intelligence Group and the Research Laboratory of Electronics (RLE), and senior author of the paper.
He is joined on the paper by lead author Ronald Davis III PhD ’24; Zaijun Chen, a former MIT postdoc who is now an assistant professor at the University of Southern California; and Ryan Hamerly, a visiting scientist at RLE and senior scientist at NTT Research. The research appears today in Science Advances.
Light-speed processing
State-of-the-art digital AI accelerators for wireless signal processing convert the signal into an image and run it through a deep-learning model to classify it. While this approach is highly accurate, the computationally intensive nature of deep neural networks makes it infeasible for many time-sensitive applications.
Optical systems can accelerate deep neural networks by encoding and processing data using light, which is also less energy intensive than digital computing. But researchers have struggled to maximize the performance of general-purpose optical neural networks when used for signal processing, while ensuring the optical device is scalable.
By developing an optical neural network architecture specifically for signal processing, which they call a multiplicative analog frequency transform optical neural network (MAFT-ONN), the researchers tackled that problem head-on.
The MAFT-ONN addresses the problem of scalability by encoding all signal data and performing all machine-learning operations within what is known as the frequency domain — before the wireless signals are digitized.
The researchers designed their optical neural network to perform all linear and nonlinear operations in-line. Both types of operations are required for deep learning.
Thanks to this innovative design, they only need one MAFT-ONN device per layer for the entire optical neural network, as opposed to other methods that require one device for each individual computational unit, or “neuron.”
“We can fit 10,000 neurons onto a single device and compute the necessary multiplications in a single shot,” Davis says.
The researchers accomplish this using a technique called photoelectric multiplication, which dramatically boosts efficiency. It also allows them to create an optical neural network that can be readily scaled up with additional layers without requiring extra overhead.
Results in nanoseconds
MAFT-ONN takes a wireless signal as input, processes the signal data, and passes the information along for later operations the edge device performs. For instance, by classifying a signal’s modulation, MAFT-ONN would enable a device to automatically infer the type of signal to extract the data it carries.
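To make the classification task concrete, here is a toy digital example that distinguishes two common modulation formats using a classic moment-based statistic. This is only a stand-in for the task itself, not the researchers’ method; MAFT-ONN performs this kind of inference in the analog optical domain rather than in code.

```python
# Toy modulation classifier (illustrative stand-in for the task).
# For BPSK symbols (+/-1), E[s^2] = 1; for QPSK symbols, s^2 is +/-1j,
# so E[s^2] -> 0. Thresholding |mean(s^2)| separates the two formats.
import numpy as np

rng = np.random.default_rng(2)

def bpsk(n):
    """Random BPSK symbols from {+1, -1}."""
    return rng.choice([1.0, -1.0], n).astype(complex)

def qpsk(n):
    """Random QPSK symbols from {(+/-1 +/- 1j) / sqrt(2)}."""
    return (rng.choice([1, -1], n) + 1j * rng.choice([1, -1], n)) / np.sqrt(2)

def classify(symbols):
    return "BPSK" if abs(np.mean(symbols ** 2)) > 0.5 else "QPSK"

noise = lambda n: rng.normal(0, 0.1, n) + 1j * rng.normal(0, 0.1, n)
print(classify(bpsk(1000) + noise(1000)))  # BPSK
print(classify(qpsk(1000) + noise(1000)))  # QPSK
```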
One of the biggest challenges the researchers faced when designing MAFT-ONN was determining how to map the machine-learning computations to the optical hardware.
“We couldn’t just take a normal machine-learning framework off the shelf and use it. We had to customize it to fit the hardware and figure out how to exploit the physics so it would perform the computations we wanted it to,” Davis says.
When they tested their architecture on signal classification in simulations, the optical neural network achieved 85 percent accuracy in a single shot and quickly converged to more than 99 percent accuracy using multiple measurements. MAFT-ONN required only about 120 nanoseconds to perform the entire process.
“The longer you measure, the higher accuracy you will get. Because MAFT-ONN computes inferences in nanoseconds, you don’t lose much speed to gain more accuracy,” Davis adds.
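One way to see how 85 percent single-shot accuracy can climb past 99 percent with repeated measurements is to assume independent errors combined by majority vote. The article does not specify the actual aggregation scheme, so the calculation below is purely illustrative.

```python
# Accuracy of majority voting over n independent shots, each correct
# with probability p (illustrative; not necessarily the actual scheme).
from math import comb

def majority_accuracy(p, n):
    """P(majority of n independent shots is correct), n odd."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 3, 5, 9):
    print(n, round(majority_accuracy(0.85, n), 4))
# 1 -> 0.85, 3 -> ~0.94, 5 -> ~0.97, 9 -> ~0.99
```

Because each shot takes nanoseconds, even nine repeated measurements cost almost nothing in latency, which is the trade-off Davis describes.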
While state-of-the-art digital radio-frequency devices can perform machine-learning inference in microseconds, optics can do it in nanoseconds or even picoseconds.
Moving forward, the researchers want to employ what are known as multiplexing schemes so they could perform more computations and scale up the MAFT-ONN. They also want to extend their work into more complex deep learning architectures that could run transformer models or LLMs.
This work was funded, in part, by the U.S. Army Research Laboratory, the U.S. Air Force, MIT Lincoln Laboratory, Nippon Telegraph and Telephone, and the National Science Foundation.
© Credit: Sampson Wilcox, Research Laboratory of Electronics
A step in fight against tick-borne disease
Photos by Stephanie Mitchell/Harvard Staff
New molecular method differentiates sexes, reveals whether females have mated
Clea Simon
Harvard Correspondent
Ticks pose a grave risk to public health, with nearly half a million cases of the tick-borne Lyme disease treated every year in the United States.
Young nymph and adult female ticks typically pose the greatest risk for transmitting infection to humans. But, researchers say, there is much that is unknown about the sexual biology of ticks, knowledge that would prove useful in control efforts.
A new paper published in the Journal of Medical Entomology marks a major stride forward, chronicling a groundbreaking molecular method that differentiates male and female blacklegged ticks (commonly called deer ticks) and also reveals whether these arachnids have mated.
Lyme is perhaps the best-known disease passed by ticks, but the bacterium behind that malady is just one of several associated with them, explained Isobel Ronai, a Life Sciences Research Foundation Post-doctoral Fellow of HHMI in the Department of Organismic and Evolutionary Biology and a primary author of the paper.
Citing other tick-borne diseases, such as babesiosis, Ronai pointed out that “ticks have a huge public health importance here in the United States in terms of the disease burden.”
The risk, according to the Centers for Disease Control, is increasing.
“The number of cases of diseases transmitted by ticks and mosquitoes has increased significantly in the U.S. over the last 25 years, with tick-borne diseases now accounting for over 80 percent of all vector-borne disease cases reported each year,” said C. Ben Beard, principal deputy director of the CDC’s Division of Vector-Borne Diseases, who called for “research aimed at better understanding tick reproductive biology.”
Ronai worked with her long-term collaborators at the University of Georgia, who had “a unique data set of tick genomes,” that is, the DNA sequences of blacklegged ticks from across the country. Together they developed a molecular test to determine whether individual ticks were male or female.
In addition to being able to sex the ticks, Ronai investigated “interesting results” in females collected in New York that had the marker for male DNA. By mating other ticks in the lab, she was able to determine that the marker could also be used to identify female ticks that had mated.
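The inference behind the assay can be summarized as a simple decision rule. The function below is an illustration of that logic only; its names are invented for the sketch rather than taken from the paper.

```python
# Decision rule implied by the assay (illustrative; names are invented).
# A morphological female carrying a male-specific DNA marker indicates
# sperm stored from mating, so the marker doubles as a mating indicator.
def interpret_assay(morphological_sex: str, male_marker_detected: bool) -> str:
    if morphological_sex == "male":
        return "male"              # the marker is expected in males
    if male_marker_detected:
        return "mated female"      # male DNA in a female implies mating
    return "unmated female"

print(interpret_assay("female", True))   # -> mated female
print(interpret_assay("female", False))  # -> unmated female
```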
Ticks have a complex life cycle.
“They feed at multiple stages,” explained Ronai. “In mosquitoes only the adult stages take a blood meal, whereas in the ticks they feed at three stages throughout their life cycle.”
The ticks begin life as eggs, from which emerge larvae. Those larvae feed on a host before transitioning to their next stage, which is called a nymph. The nymphs also feed on a host.
“They take a blood meal and use that blood to progress to the adult stage,” Ronai said. “And then, at the adult stage, the female ticks feed to produce their egg clutch to start the next generation.”
Much is still unknown.
“I am very interested in our sexing assay being used to investigate whether there is any association between the sex of a larva or nymph and what hosts they’re feeding on,” said Ronai. From this, researchers can determine “what potential microbes that cause disease male and female ticks are picking up from hosts and transmitting to humans.”
For her own research in the Extavour lab group at Harvard, the way forward is clear.
“I’m excited to explore what is happening with the biology at these immature stages,” said Ronai. “The overall sexual biology of the ticks is an area where we have a lot to learn at the molecular level.”
Then, added Ronai, “we can get to the stage of developing new control strategies for ticks that target them specifically.”
How trace elements are recycled in the deep sea
Have a damaged painting? Restore it in just hours with an AI-generated “mask”
Art restoration takes steady hands and a discerning eye. For centuries, conservators have restored paintings by identifying areas needing repair, then mixing an exact shade to fill in one area at a time. Often, a painting can have thousands of tiny regions requiring individual attention. Restoring a single painting can take anywhere from a few weeks to over a decade.
In recent years, digital restoration tools have opened a route to creating virtual representations of original, restored works. These tools apply techniques of computer vision, image recognition, and color matching to generate a “digitally restored” version of a painting relatively quickly.
Still, there has been no way to translate digital restorations directly onto an original work, until now. In a paper appearing today in the journal Nature, Alex Kachkine, a mechanical engineering graduate student at MIT, presents a new method he’s developed to physically apply a digital restoration directly onto an original painting.
The restoration is printed on a very thin polymer film, in the form of a mask that can be aligned and adhered to an original painting. It can also be easily removed. Kachkine says that a digital file of the mask can be stored and referred to by future conservators, to see exactly what changes were made to restore the original painting.
“Because there’s a digital record of what mask was used, in 100 years, the next time someone is working with this, they’ll have an extremely clear understanding of what was done to the painting,” Kachkine says. “And that’s never really been possible in conservation before.”
As a demonstration, he applied the method to a highly damaged 15th century oil painting. The method automatically identified 5,612 separate regions in need of repair, and filled in these regions using 57,314 different colors. The entire process, from start to finish, took 3.5 hours, which he estimates is about 66 times faster than traditional restoration methods.
Kachkine acknowledges that, as with any restoration project, there are ethical issues to consider, in terms of whether a restored version is an appropriate representation of an artist’s original style and intent. Any application of his new method, he says, should be done in consultation with conservators with knowledge of a painting’s history and origins.
“There is a lot of damaged art in storage that might never be seen,” Kachkine says. “Hopefully with this new method, there’s a chance we’ll see more art, which I would be delighted by.”
Digital connections
The new restoration process started as a side project. In 2021, as Kachkine made his way to MIT to start his PhD program in mechanical engineering, he drove up the East Coast and made a point to visit as many art galleries as he could along the way.
“I’ve been into art for a very long time now, since I was a kid,” says Kachkine, who restores paintings as a hobby, using traditional hand-painting techniques. As he toured galleries, he came to realize that the art on the walls is only a fraction of the works that galleries hold. Much of the art that galleries acquire is stored away because the works are aged or damaged, and take time to properly restore.
“Restoring a painting is fun, and it’s great to sit down and infill things and have a nice evening,” Kachkine says. “But that’s a very slow process.”
As he has learned, digital tools can significantly speed up the restoration process. Researchers have developed artificial intelligence algorithms that quickly comb through huge amounts of data. The algorithms learn connections within this visual data, which they apply to generate a digitally restored version of a particular painting, in a way that closely resembles the style of an artist or time period. However, such digital restorations are usually displayed virtually or printed as stand-alone works and cannot be directly applied to retouch original art.
“All this made me think: If we could just restore a painting digitally, and effect the results physically, that would resolve a lot of pain points and drawbacks of a conventional manual process,” Kachkine says.
“Align and restore”
For the new study, Kachkine developed a method to physically apply a digital restoration onto an original painting, using a 15th-century painting that he acquired when he first came to MIT. His new method involves first using traditional techniques to clean a painting and remove any past restoration efforts.
“This painting is almost 600 years old and has gone through conservation many times,” he says. “In this case there was a fair amount of overpainting, all of which has to be cleaned off to see what’s actually there to begin with.”
He scanned the cleaned painting, including the many regions where paint had faded or cracked. He then used existing artificial intelligence algorithms to analyze the scan and create a virtual version of what the painting likely looked like in its original state.
Then, Kachkine developed software that creates a map of regions on the original painting that require infilling, along with the exact colors needed to match the digitally restored version. This map is then translated into a physical, two-layer mask that is printed onto thin polymer-based films. The first layer is printed in color, while the second layer is printed in the exact same pattern, but in white.
“In order to fully reproduce color, you need both white and color ink to get the full spectrum,” Kachkine explains. “If those two layers are misaligned, that’s very easy to see. So I also developed a few computational tools, based on what we know of human color perception, to determine how small of a region we can practically align and restore.”
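To make the mapping and printing steps concrete, the sketch below shows one way a damage map and two-layer mask could be derived from a scan and its digital restoration. It is a minimal illustration, not Kachkine's actual software: the file names, the difference threshold, and the use of simple RGB distance are all invented for the example.

```python
# A minimal sketch (not the study's software) of deriving infill regions and
# a two-layer print mask from a damaged scan and a digital restoration.
import numpy as np
from PIL import Image
from scipy import ndimage

damaged = np.asarray(Image.open("scan_damaged.png").convert("RGB"), dtype=float)
restored = np.asarray(Image.open("restoration.png").convert("RGB"), dtype=float)

# Mark pixels where the restoration differs noticeably from the scan.
diff = np.linalg.norm(restored - damaged, axis=-1)
needs_infill = diff > 30.0  # hypothetical threshold in RGB units

# Group contiguous damaged pixels into discrete regions to be infilled.
labels, n_regions = ndimage.label(needs_infill)
print(f"{n_regions} regions need repair")

# Layer 1: color ink only where infill is needed (transparent elsewhere).
color_layer = np.zeros((*needs_infill.shape, 4), dtype=np.uint8)
color_layer[needs_infill, :3] = restored[needs_infill].astype(np.uint8)
color_layer[needs_infill, 3] = 255

# Layer 2: a white underlayer printed in exactly the same pattern, so the
# overlaid color layer can reproduce the full spectrum.
white_layer = np.zeros_like(color_layer)
white_layer[needs_infill] = (255, 255, 255, 255)

Image.fromarray(color_layer).save("mask_color.png")
Image.fromarray(white_layer).save("mask_white.png")
```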
Kachkine used high-fidelity commercial inkjets to print the mask’s two layers, which he carefully aligned and overlaid by hand onto the original painting and adhered with a thin spray of conventional varnish. The printed films are made from materials that can be easily dissolved with conservation-grade solutions, in case conservators need to reveal the original, damaged work. The digital file of the mask can also be saved as a detailed record of what was restored.
For the painting that Kachkine used, the method was able to fill in thousands of losses in just a few hours. “A few years ago, I was restoring this baroque Italian painting with probably the same order of magnitude of losses, and it took me nine months of part-time work,” he recalls. “The more losses there are, the better this method is.”
He estimates that the new method can be orders of magnitude faster than traditional, hand-painted approaches. If the method is adopted widely, he emphasizes that conservators should be involved at every step in the process, to ensure that the final work is in keeping with an artist’s style and intent.
“It will take a lot of deliberation about the ethical challenges involved at every stage in this process to see how this can be applied in a way that’s most consistent with conservation principles,” he says. “We’re setting up a framework for developing further methods. As others work on this, we’ll end up with methods that are more precise.”
This work was supported, in part, by the John O. and Katherine A. Lutz Memorial Fund. The research was carried out, in part, through the use of equipment and facilities at MIT.nano, with additional support from the MIT Microsystems Technology Laboratories, the MIT Department of Mechanical Engineering, and the MIT Libraries.
© Credit: Courtesy of the researchers
Window-sized device taps the air for safe drinking water
Today, 2.2 billion people in the world lack access to safe drinking water. In the United States, more than 46 million people experience water insecurity, living with either no running water or water that is unsafe to drink. The increasing need for drinking water is stretching traditional resources such as rivers, lakes, and reservoirs.
To improve access to safe and affordable drinking water, MIT engineers are tapping into an unconventional source: the air. The Earth’s atmosphere contains millions of billions of gallons of water in the form of vapor. If this vapor can be efficiently captured and condensed, it could supply clean drinking water in places where traditional water resources are inaccessible.
With that goal in mind, the MIT team has developed and tested a new atmospheric water harvester and shown that it efficiently captures water vapor and produces safe drinking water across a range of relative humidities, including dry desert air.
The new device is a black, window-sized vertical panel, made from a water-absorbent hydrogel material, enclosed in a glass chamber coated with a cooling layer. The hydrogel resembles black bubble wrap, with small dome-shaped structures that swell when the hydrogel soaks up water vapor. When the captured vapor evaporates, the domes shrink back down in an origami-like transformation. The evaporated vapor then condenses on the glass, where it can flow down and out through a tube, as clean and drinkable water.
The system runs entirely on its own, without a power source, unlike other designs that require batteries, solar panels, or electricity from the grid. The team ran the device for over a week in Death Valley, California — the driest region in North America. Even in very low-humidity conditions, the device squeezed drinking water from the air at rates of up to 160 milliliters (about two-thirds of a cup) per day.
The team estimates that multiple vertical panels, set up in a small array, could passively supply a household with drinking water, even in arid desert environments. What’s more, the system’s water production should increase with humidity, supplying drinking water in temperate and tropical climates.
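As a rough sizing check, the arithmetic below estimates the collection area a household array would need using only the worst-case Death Valley yield. The household demand figure and panel dimensions are assumptions for illustration, not figures from the study, and yields rise with humidity, so arrays in most climates would be smaller.

```python
# Back-of-envelope sizing for a panel array (illustrative assumptions only;
# the study reports per-device yields, not these household figures).
worst_case_yield_ml = 160          # mL/day from one 0.5 m^2 panel in Death Valley
yield_per_m2 = worst_case_yield_ml / 0.5 / 1000   # ~0.32 L per m^2 per day

household_need_l = 6.0             # assumed drinking water for a small household
area_needed = household_need_l / yield_per_m2     # ~19 m^2 in the driest conditions
panels = area_needed / 2.0         # hypothetical 1 m x 2 m vertical panels
print(f"{area_needed:.0f} m^2, about {panels:.0f} panels in the driest conditions")
```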
“We have built a meter-scale device that we hope to deploy in resource-limited regions, where even a solar cell is not very accessible,” says Xuanhe Zhao, the Uncas and Helen Whitaker Professor of Mechanical Engineering and Civil and Environmental Engineering at MIT. “It’s a test of feasibility in scaling up this water harvesting technology. Now people can build it even larger, or make it into parallel panels, to supply drinking water to people and achieve real impact.”
Zhao and his colleagues present the details of the new water harvesting design in a paper appearing today in the journal Nature Water. The study’s lead author is former MIT postdoc “Will” Chang Liu, who is currently an assistant professor at the National University of Singapore (NUS). MIT co-authors include Xiao-Yun Yan, Shucong Li, and Bolei Deng, along with collaborators from multiple other institutions.
Carrying capacity
Hydrogels are soft, porous materials that are made mainly from water and a microscopic network of interconnecting polymer fibers. Zhao’s group at MIT has primarily explored the use of hydrogels in biomedical applications, including adhesive coatings for medical implants, soft and flexible electrodes, and noninvasive imaging stickers.
“Through our work with soft materials, one property we know very well is the way hydrogel is very good at absorbing water from air,” Zhao says.
Researchers are exploring a number of ways to harvest water vapor for drinking water. Among the most efficient so far are devices made from metal-organic frameworks, or MOFs — ultra-porous materials that have also been shown to capture water from dry desert air. But the MOFs do not swell or stretch when absorbing water, and are limited in vapor-carrying capacity.
Water from air
The group’s new hydrogel-based water harvester addresses another key problem in similar designs. Other groups have designed water harvesters out of micro- or nano-porous hydrogels. But the water produced from these designs can be salty, requiring additional filtering. Salt is a naturally absorbent material, and researchers embed salts — typically, lithium chloride — in hydrogel to increase the material’s water absorption. The drawback, however, is that this salt can leak out with the water when it is eventually collected.
The team’s new design significantly limits salt leakage. Within the hydrogel itself, they included an extra ingredient: glycerol, a liquid compound that naturally stabilizes salt, keeping it within the gel rather than letting it crystallize and leak out with the water. The hydrogel itself has a microstructure that lacks nanoscale pores, which further prevents salt from escaping the material. The salt levels in the water they collected were below the standard threshold for safe drinking water, and significantly below the levels produced by many other hydrogel-based designs.
In addition to tuning the hydrogel’s composition, the researchers made improvements to its form. Rather than keeping the gel as a flat sheet, they molded it into a pattern of small domes resembling bubble wrap, which increase the gel’s surface area and the amount of water vapor it can absorb.
The researchers fabricated half a square meter of hydrogel and encased the material in a window-like glass chamber. They coated the exterior of the chamber with a special polymer film, which helps to cool the glass, encouraging water held in the hydrogel to evaporate and then condense onto the glass. They installed a simple tubing system to collect the water as it flows down the glass.
In November 2023, the team traveled to Death Valley, California, and set up the device as a vertical panel. Over seven days, they took measurements as the hydrogel absorbed water vapor during the night (the time of day when water vapor in the desert is highest). In the daytime, with help from the sun, the harvested water evaporated out from the hydrogel and condensed onto the glass.
Over this period, the device worked across a range of humidities, from 21 to 88 percent, and produced between 57 and 161.5 milliliters of drinking water per day. Even in the driest conditions, the device harvested more water than other passive and some actively powered designs.
“This is just a proof-of-concept design, and there are a lot of things we can optimize,” Liu says. “For instance, we could have a multipanel design. And we’re working on a next generation of the material to further improve its intrinsic properties.”
“We imagine that you could one day deploy an array of these panels, and the footprint is very small because they are all vertical,” says Zhao, who has plans to further test the panels in many resource-limited regions. “Then you could have many panels together, collecting water all the time, at household scale.”
This work was supported, in part, by the MIT J-WAFS Water and Food Seed Grant, the MIT-Chinese University of Hong Kong collaborative research program, and the UM6P-MIT collaborative research program.
© Credit: Courtesy of the researchers
How the brain solves complicated problems
The human brain is very good at solving complicated problems. One reason for that is that humans can break problems apart into manageable subtasks that are easy to solve one at a time.
This allows us to complete a daily task like going out for coffee by breaking it into steps: getting out of our office building, navigating to the coffee shop, and once there, obtaining the coffee. This strategy helps us to handle obstacles easily. For example, if the elevator is broken, we can revise how we get out of the building without changing the other steps.
While there is a great deal of behavioral evidence demonstrating humans’ skill at these complicated tasks, it has been difficult to devise experimental scenarios that allow precise characterization of the computational strategies we use to solve problems.
In a new study, MIT researchers have successfully modeled how people deploy different decision-making strategies to solve a complicated task — in this case, predicting how a ball will travel through a maze when the ball is hidden from view. The human brain cannot perform this task perfectly because it is impossible to track all of the possible trajectories in parallel, but the researchers found that people can perform reasonably well by flexibly adopting two strategies known as hierarchical reasoning and counterfactual reasoning.
The researchers were also able to determine the circumstances under which people choose each of those strategies.
“What humans are capable of doing is to break down the maze into subsections, and then solve each step using relatively simple algorithms. Effectively, when we don’t have the means to solve a complex problem, we manage by using simpler heuristics that get the job done,” says Mehrdad Jazayeri, a professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, an investigator at the Howard Hughes Medical Institute, and the senior author of the study.
Mahdi Ramadan PhD ’24 and graduate student Cheng Tang are the lead authors of the paper, which appears today in Nature Human Behaviour. Nicholas Watters PhD ’25 is also a co-author.
Rational strategies
When humans perform simple tasks that have a clear correct answer, such as categorizing objects, they perform extremely well. When tasks become more complex, such as planning a trip to your favorite cafe, there may no longer be one clearly superior answer. And, at each step, there are many things that could go wrong. In these cases, humans are very good at working out a solution that will get the task done, even though it may not be the optimal solution.
Those solutions often involve problem-solving shortcuts, or heuristics. Two prominent heuristics humans commonly rely on are hierarchical and counterfactual reasoning. Hierarchical reasoning is the process of breaking down a problem into layers, starting from the general and proceeding toward specifics. Counterfactual reasoning involves imagining what would have happened if you had made a different choice. While these strategies are well-known, scientists don’t know much about how the brain decides which one to use in a given situation.
“This is really a big question in cognitive science: How do we problem-solve in a suboptimal way, by coming up with clever heuristics that we chain together in a way that ends up getting us closer and closer until we solve the problem?” Jazayeri says.
To overcome this difficulty, Jazayeri and his colleagues devised a task that is just complex enough to require these strategies, yet simple enough that the outcomes and the calculations that go into them can be measured.
The task requires participants to predict the path of a ball as it moves through a maze with four possible trajectories. Once the ball enters the maze, people cannot see which path it travels. At two junctions in the maze, they hear an auditory cue when the ball reaches that point. Predicting the ball’s path is a task that is impossible for humans to solve with perfect accuracy.
“It requires four parallel simulations in your mind, and no human can do that. It’s analogous to having four conversations at a time,” Jazayeri says. “The task allows us to tap into this set of algorithms that the humans use, because you just can’t solve it optimally.”
The researchers recruited about 150 human volunteers to participate in the study. Before each subject began the ball-tracking task, the researchers evaluated how accurately they could estimate timespans of several hundred milliseconds, about the length of time it takes the ball to travel along one arm of the maze.
For each participant, the researchers created computational models that could predict the patterns of errors that would be seen for that participant (based on their timing skill) if they were running parallel simulations, using hierarchical reasoning alone, counterfactual reasoning alone, or combinations of the two reasoning strategies.
The researchers compared the subjects’ performance with the models’ predictions and found that for every subject, their performance was most closely associated with a model that used hierarchical reasoning but sometimes switched to counterfactual reasoning.
That suggests that instead of tracking all the possible paths that the ball could take, people broke up the task. First, they picked the direction (left or right) in which they thought the ball turned at the first junction, and continued to track the ball as it headed for the next turn. If the timing of the next sound they heard wasn’t compatible with the path they had chosen, they would go back and revise their first prediction — but only some of the time.
Switching back to the other side, which represents a shift to counterfactual reasoning, requires people to review their memory of the tones that they heard. However, it turns out that these memories are not always reliable, and the researchers found that people decided whether to go back or not based on how good they believed their memory to be.
“People rely on counterfactuals to the degree that it’s helpful,” Jazayeri says. “People who take a big performance loss when they do counterfactuals avoid doing them. But if you are someone who’s really good at retrieving information from the recent past, you may go back to the other side.”
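A toy simulation can make the two heuristics, and the role of memory reliability, concrete. The sketch below is hypothetical and is not the researchers' model: the arm travel times, noise levels, and revision threshold are invented. It follows the same logic, though, of committing to one branch (hierarchical reasoning) and revisiting the first tone from noisy memory only when the second tone fits poorly (counterfactual reasoning).

```python
# A minimal, hypothetical simulation loosely inspired by the ball-in-maze task.
# All timing parameters and noise levels here are invented for illustration.
import random

ARM_TIME = {"L": 0.6, "R": 0.9}                 # seconds to junction 1, by arm
SECOND_ARM = {("L", "L"): 0.5, ("L", "R"): 0.8,
              ("R", "L"): 0.7, ("R", "R"): 1.0}  # seconds from junction 1 to 2

def noisy(t, sigma):
    return random.gauss(t, sigma)

def trial(timing_noise=0.08, memory_noise=0.15, revise_threshold=0.12):
    # Ground truth: the ball turns left or right at each junction.
    first, second = random.choice("LR"), random.choice("LR")
    tone1 = ARM_TIME[first]
    tone2 = tone1 + SECOND_ARM[(first, second)]

    # Hierarchical step: commit to whichever first arm best explains tone 1.
    est1 = noisy(tone1, timing_noise)
    guess1 = min("LR", key=lambda a: abs(ARM_TIME[a] - est1))

    # Track the chosen branch and pick the continuation that fits tone 2 best.
    est2 = noisy(tone2, timing_noise)
    fit = lambda a, b: abs(ARM_TIME[a] + SECOND_ARM[(a, b)] - est2)
    guess2 = min("LR", key=lambda b: fit(guess1, b))

    # Counterfactual step: if even the best fit is poor, reconsider the first
    # turn, but only through a noisy memory of the first tone.
    if fit(guess1, guess2) > revise_threshold:
        mem1 = noisy(est1, memory_noise)
        guess1 = min("LR", key=lambda a: abs(ARM_TIME[a] - mem1))
        guess2 = min("LR", key=lambda b: fit(guess1, b))
    return (guess1, guess2) == (first, second)

accuracy = sum(trial() for _ in range(10_000)) / 10_000
print(f"accuracy ~ {accuracy:.2f}")
```

Lowering memory_noise in this sketch makes revisions more worthwhile, mirroring the finding that people with better recall switch to counterfactual reasoning more often.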
Human limitations
To further validate their results, the researchers created a machine-learning neural network and trained it to complete the task. A machine-learning model trained on this task will track the ball’s path accurately and make the correct prediction every time, unless the researchers impose limitations on its performance.
When the researchers added cognitive limitations similar to those faced by humans, they found that the model altered its strategies. When they eliminated the model’s ability to follow all possible trajectories, it began to employ hierarchical and counterfactual strategies like humans do. If the researchers reduced the model’s memory recall ability, it began to switch to counterfactual only if it thought its recall would be good enough to get the right answer — just as humans do.
“What we found is that networks mimic human behavior when we impose on them those computational constraints that we found in human behavior,” Jazayeri says. “This is really saying that humans are acting rationally under the constraints that they have to function under.”
By slightly varying the amount of memory impairment programmed into the models, the researchers also saw hints that the switching of strategies appears to happen gradually, rather than at a distinct cut-off point. They are now performing further studies to try to determine what is happening in the brain as these shifts in strategy occur.
The research was funded by a Lisa K. Yang ICoN Fellowship, a Friends of the McGovern Institute Student Fellowship, a National Science Foundation Graduate Research Fellowship, the Simons Foundation, the Howard Hughes Medical Institute, and the McGovern Institute.
© Credit: Shutterstock
Menstrual tracking app data is a ‘gold mine’ for advertisers that risks women’s safety – report
Smartphone apps that track menstrual cycles are a ‘gold mine’ for consumer profiling, collecting information on everything from exercise, diet and medication to sexual preferences, hormone levels and contraception use.
This is according to a new report from the University of Cambridge’s Minderoo Centre for Technology and Democracy, which argues that the financial worth of this data is ‘vastly underestimated’ by users who supply profit-driven companies with highly intimate details in a market lacking in regulation.
The report’s authors caution that cycle tracking app (CTA) data in the wrong hands could result in risks to job prospects, workplace monitoring, health insurance discrimination and cyberstalking – and limit access to abortion.
They call for better governance of the booming ‘femtech’ industry to protect users when their data is sold at scale, arguing that apps must provide clear consent options rather than all-or-nothing data collection, and urge public health bodies to launch alternatives to commercial CTAs.
“Menstrual cycle tracking apps are presented as empowering women and addressing the gender health gap,” said Dr Stefanie Felsberger, lead author of the report from Cambridge’s Minderoo Centre. “Yet the business model behind their services rests on commercial use, selling user data and insights to third parties for profit.”
“There are real and frightening privacy and safety risks to women as a result of the commodification of the data collected by cycle tracking app companies.”
As most cycle tracking apps are targeted at women aiming to get pregnant, the download data alone is of huge commercial value, say researchers, as – other than home buying – no life event is linked to such dramatic shifts in consumer behaviour.
In fact, data on pregnancy is believed to be over two hundred times more valuable than data on age, gender or location for targeted advertising. The report points out that period tracking could also be used to target women at different points in their cycle. For example, the oestrogen or ‘mating’ phase could see an increase in cosmetics adverts.
The three most popular apps alone had an estimated quarter of a billion downloads worldwide in 2024. So-called femtech – digital products focused on women’s health and wellbeing – is estimated to reach a market value of over US$60 billion (£45 billion) by 2027, with cycle tracking apps making up half of this market.
With such intense demand for period tracking, the report argues that the UK’s National Health Service (NHS) should develop its own transparent and trustworthy app to rival those from private companies, with apps allowing permission for data to be used in valid medical research.
“The UK is ideally positioned to solve the question of access to menstrual data for researchers, as well as privacy and data commodification concerns, by developing an NHS app to track menstrual cycles,” said Felsberger, who points out that Planned Parenthood in the US already has its own app, but the UK lacks an equivalent.
“Apps that are situated within public healthcare systems, and not driven primarily by profit, will mitigate privacy violations, provide much-needed data on reproductive health, and give people more agency over how their menstrual data is used.”
“The use of cycle tracking apps is at an all-time high,” said Prof Gina Neff, Executive Director of Cambridge’s Minderoo Centre. “Women deserve better than to have their menstrual tracking data treated as consumer data, but there is a different possible future.”
“Researchers could use this data to help answer questions about women’s health. Care providers could use this data for important information about their patients’ health. Women could get meaningful insights that they are searching for,” Neff said.
In the UK and EU, period tracking data is considered ‘special category’, as with data on genetics or ethnicity, and has more legal safeguarding. However, the report highlights how, in the UK, data from women’s health apps has been used to charge women with illegally accessing abortion services.
In the US, data about menstrual cycles has been collected by officials in an attempt to undermine abortion access. Despite this, data from CTAs are regulated simply as ‘general wellness’ and granted no special protections.
“Menstrual tracking data is being used to control people’s reproductive lives,” said Felsberger. “It should not be left in the hands of private companies.”
Investigations by media, non-profit, and consumer groups have revealed CTAs sharing data with third parties ranging from advertisers and data brokers to tech giants such as Facebook and Google.
The report cites work published earlier this year from Privacy International showing that, while the major CTA companies have updated their approach to data sharing, device information is still collected in the UK and US with “no meaningful consent”.
Despite data protection improvements, the report suggests that user information is still shared with third parties such as cloud-based delivery networks that move the data around, and outside developers contracted to handle app functionalities.
At the very least, commercial apps could include delete buttons, says Felsberger, allowing users to erase their data both in the app and on company servers, helping protect against situations – from legal to medical – where data could be used against them.
“Menstrual tracking in the US should be classed as medical data,” said Felsberger. “In the UK and EU, where this data is already afforded special category status, more focus needs to be placed on enforcing existing regulation.”
The report stresses the need to improve public awareness and digital literacy around period tracking. The researchers argue that schools should educate students on medical data apps and privacy, so young people are less vulnerable to health hoaxes.
The report ‘The High Stakes of Tracking Menstruation’ is authored by Dr Stefanie Felsberger with a foreword by Professor Gina Neff and published by the Minderoo Centre for Technology and Democracy (MCTD).
Cambridge researchers urge public health bodies like the NHS to provide trustworthy, research-driven alternatives to platforms driven by profit.
University of Melbourne Sustainability Report: Building campus biodiversity
The University of Melbourne will be better able to track, protect and enhance the rich biological diversity of its campuses following completion of a long-running Biodiversity Baseline Data Project.
Once-a-week pill for schizophrenia shows promise in clinical trials
For many patients with schizophrenia, other psychiatric illnesses, or diseases such as hypertension and asthma, it can be difficult to take their medicine every day. To help overcome that challenge, MIT researchers have developed a pill that can be taken just once a week and gradually releases medication from within the stomach.
In a phase 3 clinical trial conducted by MIT spinout Lyndra Therapeutics, the researchers used the once-a-week pill to deliver a widely used medication for managing the symptoms of schizophrenia. They found that this treatment regimen maintained consistent levels of the drug in patients’ bodies and controlled their symptoms just as well as daily doses of the drug. The results are published today in Lancet Psychiatry.
“We’ve converted something that has to be taken once a day to once a week, orally, using a technology that can be adapted for a variety of medications,” says Giovanni Traverso, an associate professor of mechanical engineering at MIT, a gastroenterologist at Brigham and Women’s Hospital, an associate member of the Broad Institute, and an author of the study. “The ability to provide a sustained level of drug for a prolonged period, in an easy-to-administer system, makes it easier to ensure patients are receiving their medication.”
Traverso’s lab began developing the ingestible capsule studied in this trial more than 10 years ago, as part of an ongoing effort to make medications easier for patients to take. The capsule is about the size of a multivitamin, and once swallowed, it expands into a star shape that helps it remain in the stomach until all of the drug is released.
Richard Scranton, chief medical officer of Lyndra Therapeutics, is the senior author of the paper, and Leslie Citrome, a clinical professor of psychiatry and behavioral sciences at New York Medical College School of Medicine, is the lead author. Nayana Nagaraj, medical director at Lyndra Therapeutics, and Todd Dumas, senior director of pharmacometrics at Certara, are also authors.
Sustained delivery
Over the past decade, Traverso’s lab has been working on a variety of capsules that can be swallowed and remain in the digestive tract for days or weeks, slowly releasing their drug payload. In 2016, his team reported the star-shaped device, which was then further developed by Lyndra for clinical trials in patients with schizophrenia.
The device contains six arms that can be folded in, allowing it to fit inside a capsule. The capsule dissolves when the device reaches the stomach, allowing the arms to spring out. Once the arms are extended, the device becomes too large to pass through the pylorus (the exit of the stomach), so it remains freely floating in the stomach as drugs are slowly released from the arms. After about a week, the arms break off on their own, and each segment exits the stomach and passes through the digestive tract.
For the clinical trials, the capsule was loaded with risperidone, a commonly prescribed medication used to treat schizophrenia. Most patients take the drug orally once a day. There are also injectable versions that can be given every two weeks, every month, or every two months, but they require administration by a health care provider and are not always acceptable to patients.
The MIT and Lyndra team chose to focus on schizophrenia in hopes that a drug regimen that could be administered less frequently, through oral delivery, could make treatment easier for patients and their caregivers.
“One of the areas of unmet need that was recognized early on is neuropsychiatric conditions, where the illness can limit or impair one’s ability to remember to take their medication,” Traverso says. “With that in mind, one of the conditions that has been a big focus has been schizophrenia.”
The phase 3 trial was coordinated by researchers at Lyndra and enrolled 83 patients at five different sites around the United States. Forty-five of those patients completed the full five weeks of the study, in which they took one risperidone-loaded capsule per week.
Throughout the study, the researchers measured the amount of drug in each patient’s bloodstream. Each week, they found a sharp increase on the day the pill was given, followed by a slow decline over the next week. The levels were all within the optimal range, and there was less variation over time than is seen when patients take a pill each day.
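The damping effect of sustained release can be illustrated with a toy one-compartment pharmacokinetic model. The parameters below (half-life, dose rates) are invented, and the depot is idealized as a steady week-long release, whereas the trial observed a peak on dosing day; the sketch only shows why spreading the same dose over time narrows the swing between peaks and troughs.

```python
# A toy one-compartment pharmacokinetic sketch (invented parameters, not the
# trial's model) contrasting daily pills with a week-long gastric depot.
import math

HALF_LIFE_H = 20.0                     # assumed effective half-life, hours
k = math.log(2) / HALF_LIFE_H          # first-order elimination rate
dt, days = 0.1, 14                     # time step (h) and simulated duration

def simulate(input_rate):
    """Euler-integrate drug level given an input rate function (units/h)."""
    level, out, t = 0.0, [], 0.0
    while t < days * 24:
        level += input_rate(t) * dt    # absorption
        level -= k * level * dt        # first-order elimination
        out.append(level)
        t += dt
    return out

# Daily dosing: a one-hour absorption burst once every 24 h (10 units/day).
daily = simulate(lambda t: 10.0 if (t % 24) < 1 else 0.0)
# Weekly depot: the same total weekly dose released steadily over 7 days.
weekly = simulate(lambda t: 70.0 / (7 * 24))

for name, series in (("daily", daily), ("weekly depot", weekly)):
    tail = series[len(series) // 2:]   # near-steady-state second week
    print(f"{name}: min {min(tail):.1f}, max {max(tail):.1f}")
```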
Effective treatment
Using an evaluation known as the Positive and Negative Syndrome Scale (PANSS), the researchers also found that the patients’ symptoms remained stable throughout the study.
“One of the biggest obstacles in the care of people with chronic illnesses in general is that medications are not taken consistently. This leads to worsening symptoms, and in the case of schizophrenia, potential relapse and hospitalization,” Citrome says. “Having the option to take medication by mouth once a week represents an important option that can assist with adherence for the many patients who would prefer oral medications versus injectable formulations.”
Side effects from the treatment were minimal, the researchers found. Some patients experienced mild acid reflux and constipation early in the study, but these did not last long. The results, showing effectiveness of the capsule and few side effects, represent a major milestone in this approach to drug delivery, Traverso says.
“This really demonstrates what we had hypothesized a decade ago — that a single capsule providing a drug depot within the GI tract could be possible,” he says. “Here what you see is that the capsule can achieve the drug levels that were predicted, and also control symptoms in a sizeable cohort of patients with schizophrenia.”
The investigators now hope to complete larger phase 3 studies before applying for FDA approval of this delivery approach for risperidone. They are also preparing for phase 1 trials using this capsule to deliver other drugs, including contraceptives.
“We are delighted that this technology which started at MIT has reached the point of phase 3 clinical trials,” says Robert Langer, the David H. Koch Institute Professor at MIT, who was an author of the original study on the star capsule and is a co-founder of Lyndra Therapeutics.
The research was funded by Lyndra Therapeutics.
© Credit: Adam Glanzman
‘Who we are and what we stand for’
Surgeon, best-selling author, and public health leader Atul Gawande delivers the Alumni Day keynote address.
Abbie Barrett
Harvard Correspondent
Amid Harvard Alumni Day celebration, speakers address challenges, share messages of strength and resolve
Part of the Commencement 2025 series
A collection of features and profiles covering Harvard University’s 374th Commencement.
Thousands of alumni from around the world gathered on campus Friday for Harvard Alumni Day — an annual event celebrating alumni of all Harvard Schools and class years and the collective strength of their communities. The day’s events, which coincided with Harvard and Radcliffe College reunions and other alumni programs across the University, drew a record 9,600-plus attendees this year. The festivities included musical performances, the presentation of the Harvard Medals, and a keynote address by renowned surgeon, best-selling author, and public health leader Atul Gawande, M.D. ’95, M.P.H. ’99.
The main program began with the traditional alumni parade from the Old Yard to Tercentenary Theatre, led by the chief marshal of alumni Dara Olmsted Silverstein ’00 and the two oldest alumni in attendance, Linda Cabot Black ’51 and Stanley Karson ’48, A.M. ’50.
After Peter J. Koutoujian, M.P.A. ’03, the sheriff of Middlesex County, called the 155th annual meeting of the Harvard Alumni Association to order, HAA board president Moitri Chowdhury Savard ’93 took to the podium, referencing an appeal she made to the College Class of 2024 last spring: to consider the plural of the University’s motto, Veritas, and embrace Veritates — the ability to hold many truths simultaneously in order to connect across differences.
“Today I am even more convinced that we must strengthen this muscle to hold multiple truths and to coalesce around our many shared values, particularly freedom of thought and expression, and respect and kindness,” said Savard, who will be succeeded by incoming HAA President William Makris, Ed.M. ’00, on July 1.
Noting the unprecedented challenges the University has faced over the past year, she urged fellow alumni to continue “to be informed, principled ambassadors” of Harvard and higher education more broadly.
Sarah Karmon, executive director of the HAA and associate vice president of alumni affairs and development, spoke next, expressing her gratitude for the steadfast support and contributions of Harvard’s alumni volunteers. She also gave special thanks to those who led reunion planning and fundraising efforts for their classes this year, noting the Class of 2005’s record-setting attendance for a 20th reunion.
Karmon closed by paying tribute to Jack Reardon ’60, associate vice president of University relations, who will retire at the end of the month after more than 60 years of service to Harvard. “Every person in this theater today has benefited from his leadership, his wisdom, and his deep commitment to his alma mater,” she said.
President Alan M. Garber ’76, Ph.D. ’82, who was met with a standing ovation, spoke to the challenges of a difficult year, laying out how the University is working to address legitimate criticisms while defending itself against misrepresentations and retaliatory actions from the federal government.
“Only one thing about Harvard has persisted over 388 years, and actually it’s not our name; it’s our embrace of scrutiny, advancement, and renewal,” said Garber, noting that the University is built on the idea of continual improvement to create a better institution and world for successive generations. “The pursuit of truth — of Veritas — is perpetual,” Garber said. “We are unceasing in our efforts to champion our motto.”
He also remarked on the expressions of support the University has received from alumni, as well as from people with no affiliation to Harvard who have championed the University in its fight to preserve academic freedom.
Garber ended his speech with a short valediction: “May Veritas lift us up and light our way, especially in dark times, enabling Harvard and our fellow universities to persevere and succeed in building a better future — not perfect, but more perfect than the present.”
Following Garber’s speech, brothers Danilo “Dacha” Thurber ’25 and Sava Thurber ’27 performed two songs — a traditional Polish folksong called “Tesknota Za Ojczyzna Marsz” and “Etudes-Caprices Op. 18, No. 4” by Polish composer Henryk Wieniawski — which they noted “highlight the importance of an international voice in a place which we are so fortunate to call home.”
Garber then presented this year’s Harvard Medals to Kathy Delaney-Smith, Paul J. Finnegan ’75, M.B.A. ’82, Carolyn Hughes ’54, and David Johnston ’63, who were recognized for their extraordinary service to the University.
In his keynote address, Gawande, who served as assistant administrator for global health at USAID from 2022 to early 2025, called out recent federal actions for undermining public health and harming Harvard and the country.
The University is facing existential questions, said Gawande, a general and endocrine surgeon at Brigham and Women’s Hospital and a professor at Harvard Medical School and Harvard T.H. Chan School of Public Health. He learned just in the previous week that funding had been cut for his own research center’s efforts to reduce surgical patient mortality.
“The discussions have been hard, but the answer was ultimately easy,” he said, expressing his gratitude to Garber and the Corporation for standing strong against demands that threaten the foundation of teaching, scholarship, and discovery. Navigating an uncertain future, he said, “is far easier when we know who we are and what we stand for.”
The main program ended with a performance of “Fair Harvard” by alumni members of the Harvard Din & Tonics, Harvard Glee Club, Harvard-Radcliffe Collegium Musicum, Harvard University Choir, Kuumba Singers of Harvard College, Radcliffe Choral Society, and Radcliffe Pitches. Savard told those in attendance to save the date for next year’s Harvard Alumni Day — June 5, 2026 — before the crowd dispersed to celebrate in the Yard with lawn games, photo opportunities, and food and beverage trucks.
Sixteen Harvard Clubs around the world also hosted local celebrations of Harvard Alumni Day for those who could not attend in person. Later in the afternoon, many Shared Interest Groups hosted meetup events on campus and in Cambridge, including a get-together at Charlie’s Kitchen hosted by Harvardwood. Alumni also had the opportunity to attend several Harvard Alumni Day symposia sessions, which included faculty panels on Harvard’s global impact, the ongoing work of the Salata Institute for Climate and Sustainability, and the Harvard Data Science Initiative’s efforts to ensure AI serves society in meaningful and ethical ways.
Recovering from the past and transitioning to a better energy future
As the frequency and severity of extreme weather events grow, it may become increasingly necessary to employ a bolder approach to climate change, warned Emily A. Carter, the Gerhard R. Andlinger Professor in Energy and the Environment at Princeton University. Carter made her case for why the energy transition is no longer enough in the face of climate change while speaking at the MIT Energy Initiative (MITEI) Presents: Advancing the Energy Transition seminar on the MIT campus.
“If all we do is take care of what we did in the past — but we don’t change what we do in the future — then we’re still going to be left with very serious problems,” she said. Our approach to climate change mitigation must comprise transformation, intervention, and adaptation strategies, said Carter.
Transitioning to a decarbonized electricity system is one piece of the puzzle. Growing amounts of solar and wind energy — along with nuclear, hydropower, and geothermal — are slowly transforming the electricity landscape, but Carter noted that there are new technologies farther down the pipeline.
“Advanced geothermal may come on in the next couple of decades. Fusion will only really start to play a role later in the century, but could provide firm electricity such that we can start to decommission nuclear,” said Carter, who is also a senior strategic advisor and associate laboratory director at the Department of Energy’s Princeton Plasma Physics Laboratory.
Taking this a step further, Carter outlined how this carbon-free electricity should then be used to electrify everything we can. She highlighted the industrial sector as a critical area for transformation: “The energy transition is about transitioning off of fossil fuels. If you look at the manufacturing industries, they are driven by fossil fuels right now. They are driven by fossil fuel-driven thermal processes.” Carter noted that thermal energy is much less efficient than electricity and highlighted electricity-driven strategies that could replace heat in manufacturing, such as electrolysis, plasmas, light-emitting diodes (LEDs) for photocatalysis, and joule heating.
The transportation sector is also a key area for electrification, Carter said. While electric vehicles have become increasingly common in recent years, heavy-duty transportation is not as easily electrified. The solution? “Carbon-neutral fuels for heavy-duty aviation and shipping,” she said, emphasizing that these fuels will need to become part of the circular economy. “We know that when we burn those fuels, they’re going to produce CO2 [carbon dioxide] again. They need to come from a source of CO2 that is not fossil-based.”
The next step is intervention in the form of carbon dioxide removal, which then necessitates methods of storage and utilization, according to Carter. “There’s a lot of talk about building large numbers of pipelines to capture the CO2 — from fossil fuel-driven power plants, cement plants, steel plants, all sorts of industrial places that emit CO2 — and then piping it and storing it in underground aquifers,” she explained. Offshore pipelines are much more expensive than those on land, but they can mitigate public concerns over safety. Europe is focusing its efforts exclusively offshore for this very reason, and the same could be true for the United States, Carter said.
Once carbon dioxide is captured, commercial utilization may provide economic leverage to accelerate sequestration, even if only a few gigatons are used per year, Carter noted. Through mineralization, CO2 can be converted into carbonates, which could be used in building materials such as concrete and road-paving materials.
There is another form of intervention that Carter currently views as a last resort: solar geoengineering, sometimes known as solar radiation management or SRM. In 1991, Mount Pinatubo in the Philippines erupted and released sulfur dioxide into the stratosphere, which caused a temporary cooling of the Earth by approximately 0.5 degree Celsius for over a year. SRM seeks to recreate that cooling effect by injecting particles into the atmosphere that reflect sunlight. According to Carter, there are three main strategies: stratospheric aerosol injection, cirrus cloud thinning (thinning clouds to let more infrared radiation emitted by the earth escape to space), and marine cloud brightening (brightening clouds with sea salt so they reflect more light).
“My view is, I hope we don't ever have to do it, but I sure think we should understand what would happen in case somebody else just decides to do it. It’s a global security issue,” said Carter. “In principle, it’s not so difficult technologically, so we’d like to really understand and to be able to predict what would happen if that happened.”
With any technology, stakeholder and community engagement is essential for deployment, Carter said. She emphasized the importance of both respectfully listening to concerns and thoroughly addressing them, stating, “Hopefully, there’s enough information given to assuage their fears. We have to gain the trust of people before any deployment can be considered.”
A crucial component of this trust starts with the responsibility of the scientific community to be transparent and critique each other’s work, Carter said. “Skepticism is good. You should have to prove your proof of principle.”
MITEI Presents: Advancing the Energy Transition is an MIT Energy Initiative speaker series highlighting energy experts and leaders at the forefront of the scientific, technological, and policy solutions needed to transform our energy systems. The series will continue in fall 2025. For more information on this and additional events, visit the MITEI website.
© Photo: Kelley Travers
Inroads to personalized AI trip planning
Travel agents help to provide end-to-end logistics — like transportation, accommodations, meals, and lodging — for businesspeople, vacationers, and everyone in between. For those looking to make their own arrangements, large language models (LLMs) seem like they would be a strong tool to employ for this task because of their ability to iteratively interact using natural language, provide some commonsense reasoning, collect information, and call other tools in to help with the task at hand. However, recent work has found that state-of-the-art LLMs struggle with complex logistical and mathematical reasoning, as well as problems with multiple constraints, like trip planning, where they’ve been found to provide viable solutions 4 percent or less of the time, even with additional tools and application programming interfaces (APIs).
Subsequently, a research team from MIT and the MIT-IBM Watson AI Lab reframed the issue to see if they could increase the success rate of LLM solutions for complex problems. “We believe a lot of these planning problems are naturally a combinatorial optimization problem,” where you need to satisfy several constraints in a certifiable way, says Chuchu Fan, associate professor in the MIT Department of Aeronautics and Astronautics (AeroAstro) and the Laboratory for Information and Decision Systems (LIDS). She is also a researcher in the MIT-IBM Watson AI Lab. Her team applies machine learning, control theory, and formal methods to develop safe and verifiable control systems for robotics, autonomous systems, controllers, and human-machine interactions.
Noting the transferable nature of their work for travel planning, the group sought to create a user-friendly framework that can act as an AI travel broker to help develop realistic, logical, and complete travel plans. To achieve this, the researchers combined common LLMs with algorithms and a complete satisfiability solver. Solvers are mathematical tools that rigorously check if criteria can be met and how, but they require complex computer programming for use. This makes them natural companions to LLMs for problems like these, where users want help planning in a timely manner, without the need for programming knowledge or research into travel options. Further, if a user’s constraint cannot be met, the new technique can identify and articulate where the issue lies and propose alternative measures to the user, who can then choose to accept, reject, or modify them until a valid plan is formulated, if one exists.
“Different complexities of travel planning are something everyone will have to deal with at some point. There are different needs, requirements, constraints, and real-world information that you can collect,” says Fan. “Our idea is not to ask LLMs to propose a travel plan. Instead, an LLM here is acting as a translator to translate this natural language description of the problem into a problem that a solver can handle [and then provide that to the user],” says Fan.
Co-authoring a paper on the work with Fan are Yang Zhang of MIT-IBM Watson AI Lab, AeroAstro graduate student Yilun Hao, and graduate student Yongchao Chen of MIT LIDS and Harvard University. This work was recently presented at the Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics.
Breaking down the solver
Math tends to be domain-specific. For example, in natural language processing, LLMs perform regressions to predict the next token, a.k.a. “word,” in a series to analyze or create a document. This works well for generalizing diverse human inputs. LLMs alone, however, wouldn’t work for formal verification applications, like in aerospace or cybersecurity, where circuit connections and constraint tasks need to be complete and proven, otherwise loopholes and vulnerabilities can sneak by and cause critical safety issues. Here, solvers excel, but they need fixed formatting inputs and struggle with unsatisfiable queries. A hybrid technique, however, provides an opportunity to develop solutions for complex problems, like trip planning, in a way that’s intuitive for everyday people.
“The solver is really the key here, because when we develop these algorithms, we know exactly how the problem is being solved as an optimization problem,” says Fan. Specifically, the research group used a solver called satisfiability modulo theories (SMT), which determines whether a formula can be satisfied. “With this particular solver, it’s not just doing optimization. It’s doing reasoning over a lot of different algorithms there to understand whether the planning problem is possible or not to solve. That’s a pretty significant thing in travel planning. It’s not a very traditional mathematical optimization problem because people come up with all these limitations, constraints, restrictions,” notes Fan.
Translation in action
The “travel agent” works in four steps that can be repeated, as needed. The researchers used GPT-4, Claude-3, or Mistral-Large as the method’s LLM. First, the LLM parses a user’s requested travel plan prompt into planning steps, noting preferences for budget, hotels, transportation, destinations, attractions, restaurants, and trip duration in days, as well as any other user prescriptions. Those steps are then converted into executable Python code (with a natural language annotation for each of the constraints), which calls APIs like CitySearch, FlightSearch, etc. to collect data, and the SMT solver to begin executing the steps laid out in the constraint satisfaction problem. If a sound and complete solution can be found, the solver outputs the result to the LLM, which then provides a coherent itinerary to the user.
If one or more constraints cannot be met, the framework begins looking for an alternative. The solver outputs code identifying the conflicting constraints (with its corresponding annotation) that the LLM then provides to the user with a potential remedy. The user can then decide how to proceed, until a solution (or the maximum number of iterations) is reached.
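As an illustration of the solver stage, the sketch below encodes a toy trip in Z3, a widely used SMT solver; the constraints and prices are invented, and in the actual framework code like this (plus the natural-language annotations) is generated by the LLM rather than written by hand. When the constraints cannot all hold, the solver's unsat core names ones that clash, which the LLM can translate into a plain-language suggestion for the user.

```python
# A minimal sketch of the solver stage using Z3 (pip install z3-solver).
# The trip options and user constraints below are invented for illustration.
from z3 import Int, Solver, sat

days = Int("days")
cost_hotel, cost_flight = Int("cost_hotel"), Int("cost_flight")

s = Solver()
# Facts pulled from hypothetical API results, plus the user's requirements;
# each constraint is tracked under a name so conflicts can be reported.
s.assert_and_track(cost_flight == 400, "flight_price")
s.assert_and_track(cost_hotel == 120 * days, "hotel_rate")
s.assert_and_track(days >= 5, "min_trip_length")
s.assert_and_track(cost_hotel + cost_flight <= 900, "budget")

if s.check() == sat:
    m = s.model()
    print(f"feasible plan: {m[days]} days")
else:
    # The unsat core lists tracked constraints that cannot hold together
    # (here a 5-day stay at this hotel rate breaks the budget).
    print("conflicting constraints:", s.unsat_core())
```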
Generalizable and robust planning
The researchers tested their method using the aforementioned LLMs against other baselines: GPT-4 by itself, OpenAI o1-preview by itself, GPT-4 with a tool to collect information, and a search algorithm that optimizes for total cost. Using the TravelPlanner dataset, which includes data for viable plans, the team looked at multiple performance metrics: how frequently a method could deliver a solution, if the solution satisfied commonsense criteria like not visiting two cities in one day, the method’s ability to meet one or more constraints, and a final pass rate indicating that it could meet all constraints. The new technique generally achieved over a 90 percent pass rate, compared to 10 percent or lower for the baselines. The team also explored the addition of a JSON representation within the query step, which further made it easier for the method to provide solutions with 84.4-98.9 percent pass rates.
The MIT-IBM team posed additional challenges for their method. They looked at how important each component of their solution was — such as removing human feedback or the solver — and how that affected plan adjustments to unsatisfiable queries within 10 or 20 iterations using a new dataset they created called UnsatChristmas, which includes unseen constraints, and a modified version of TravelPlanner. On average, the MIT-IBM group’s framework achieved 78.6 and 85 percent success, which rises to 81.6 and 91.7 percent with additional plan modification rounds. The researchers analyzed how well it handled new, unseen constraints and paraphrased query-step and step-code prompts. In both cases, it performed very well, especially with an 86.7 percent pass rate for the paraphrasing trial.
Lastly, the MIT-IBM researchers applied their framework to other domains, with tasks like block picking, task allocation, the traveling salesman problem, and warehouse operation. Here, the method must select numbered, colored blocks to maximize its score; optimize robot task assignment for different scenarios; plan trips that minimize distance traveled; and complete and optimize robot tasks in a warehouse setting.
“I think this is a very strong and innovative framework that can save a lot of time for humans, and also, it’s a very novel combination of the LLM and the solver,” says Hao.
This work was funded, in part, by the Office of Naval Research and the MIT-IBM Watson AI Lab.
© Photo: AdobeStock
Inroads to personalized AI trip planning
Travel agents help to provide end-to-end logistics — like transportation, accommodations, meals, and lodging — for businesspeople, vacationers, and everyone in between. For those looking to make their own arrangements, large language models (LLMs) seem like they would be a strong tool to employ for this task because of their ability to iteratively interact using natural language, provide some commonsense reasoning, collect information, and call other tools in to help with the task at hand. However, recent work has found that state-of-the-art LLMs struggle with complex logistical and mathematical reasoning, as well as problems with multiple constraints, like trip planning, where they’ve been found to provide viable solutions 4 percent or less of the time, even with additional tools and application programming interfaces (APIs).
Subsequently, a research team from MIT and the MIT-IBM Watson AI Lab reframed the issue to see if they could increase the success rate of LLM solutions for complex problems. “We believe a lot of these planning problems are naturally a combinatorial optimization problem,” where you need to satisfy several constraints in a certifiable way, says Chuchu Fan, associate professor in the MIT Department of Aeronautics and Astronautics (AeroAstro) and the Laboratory for Information and Decision Systems (LIDS). She is also a researcher in the MIT-IBM Watson AI Lab. Her team applies machine learning, control theory, and formal methods to develop safe and verifiable control systems for robotics, autonomous systems, controllers, and human-machine interactions.
Noting the transferable nature of their work for travel planning, the group sought to create a user-friendly framework that can act as an AI travel broker to help develop realistic, logical, and complete travel plans. To achieve this, the researchers combined common LLMs with algorithms and a complete satisfiability solver. Solvers are mathematical tools that rigorously check if criteria can be met and how, but they require complex computer programming for use. This makes them natural companions to LLMs for problems like these, where users want help planning in a timely manner, without the need for programming knowledge or research into travel options. Further, if a user’s constraint cannot be met, the new technique can identify and articulate where the issue lies and propose alternative measures to the user, who can then choose to accept, reject, or modify them until a valid plan is formulated, if one exists.
“Different complexities of travel planning are something everyone will have to deal with at some point. There are different needs, requirements, constraints, and real-world information that you can collect,” says Fan. “Our idea is not to ask LLMs to propose a travel plan. Instead, an LLM here is acting as a translator to translate this natural language description of the problem into a problem that a solver can handle [and then provide that to the user],” says Fan.
Co-authoring a paper on the work with Fan are Yang Zhang of MIT-IBM Watson AI Lab, AeroAstro graduate student Yilun Hao, and graduate student Yongchao Chen of MIT LIDS and Harvard University. This work was recently presented at the Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics.
Breaking down the solver
Math tends to be domain-specific. For example, in natural language processing, LLMs perform regressions to predict the next token, a.k.a. “word,” in a series to analyze or create a document. This works well for generalizing diverse human inputs. LLMs alone, however, wouldn’t work for formal verification applications, like in aerospace or cybersecurity, where circuit connections and constraint tasks need to be complete and proven, otherwise loopholes and vulnerabilities can sneak by and cause critical safety issues. Here, solvers excel, but they need fixed formatting inputs and struggle with unsatisfiable queries. A hybrid technique, however, provides an opportunity to develop solutions for complex problems, like trip planning, in a way that’s intuitive for everyday people.
“The solver is really the key here, because when we develop these algorithms, we know exactly how the problem is being solved as an optimization problem,” says Fan. Specifically, the research group used a solver called satisfiability modulo theories (SMT), which determines whether a formula can be satisfied. “With this particular solver, it’s not just doing optimization. It’s doing reasoning over a lot of different algorithms there to understand whether the planning problem is possible or not to solve. That’s a pretty significant thing in travel planning. It’s not a very traditional mathematical optimization problem because people come up with all these limitations, constraints, restrictions,” notes Fan.
Translation in action
The “travel agent” works in four steps that can be repeated as needed. The researchers used GPT-4, Claude-3, or Mistral-Large as the method’s LLM. First, the LLM parses a user’s requested travel plan prompt into planning steps, noting preferences for budget, hotels, transportation, destinations, attractions, restaurants, and trip duration in days, as well as any other user prescriptions. Those steps are then converted into executable Python code (with a natural language annotation for each of the constraints), which calls APIs such as CitySearch and FlightSearch to collect data, and the SMT solver to begin executing the steps laid out in the constraint satisfaction problem. If a sound and complete solution can be found, the solver outputs the result to the LLM, which then provides a coherent itinerary to the user.
If one or more constraints cannot be met, the framework begins looking for an alternative. The solver outputs code identifying the conflicting constraints (with their corresponding annotations), which the LLM then presents to the user along with a potential remedy. The user can then decide how to proceed, until a solution (or the maximum number of iterations) is reached.
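To make the pipeline concrete, here is a minimal, hypothetical sketch of the kind of constraint program the LLM might emit, written against the open-source Z3 SMT solver’s Python bindings; the prices, option lists, and the "budget" label are invented for illustration and are not from the paper:

```python
# A minimal sketch (not the authors' released code) of an LLM-emitted
# constraint program handed to an SMT solver. Requires the z3-solver package.
from z3 import If, Int, Solver, sat

budget = 1200                     # user's stated budget in dollars (assumed)
flight_costs = [450, 380, 520]    # stand-ins for FlightSearch-style API results
hotel_rates = [90, 140, 200]      # stand-ins for nightly rates from a hotel API
nights = 4

flight = Int("flight")            # index of the chosen flight option
hotel = Int("hotel")              # index of the chosen hotel option

s = Solver()
s.add(flight >= 0, flight < len(flight_costs))   # each choice must be a real option
s.add(hotel >= 0, hotel < len(hotel_rates))

# Express total cost by case analysis over the discrete options.
flight_cost = sum(If(flight == i, c, 0) for i, c in enumerate(flight_costs))
stay_cost = nights * sum(If(hotel == i, r, 0) for i, r in enumerate(hotel_rates))

# Track the budget constraint by name, mirroring the framework's annotations.
s.assert_and_track(flight_cost + stay_cost <= budget, "budget")

if s.check() == sat:
    m = s.model()
    print("flight option:", m[flight], "| hotel option:", m[hotel])
else:
    # The named constraints in the core are what the LLM explains to the user.
    print("conflicting constraints:", s.unsat_core())
```

If the budget cannot be met, the named constraint returned in the unsatisfiable core is exactly the kind of annotation the LLM can turn into a plain-language explanation and a proposed fix.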
Generalizable and robust planning
The researchers tested their method using the aforementioned LLMs against other baselines: GPT-4 by itself, OpenAI o1-preview by itself, GPT-4 with a tool to collect information, and a search algorithm that optimizes for total cost. Using the TravelPlanner dataset, which includes data for viable plans, the team looked at multiple performance metrics: how frequently a method could deliver a solution, whether the solution satisfied commonsense criteria like not visiting two cities in one day, the method’s ability to meet one or more constraints, and a final pass rate indicating that it could meet all constraints. The new technique generally achieved over a 90 percent pass rate, compared to 10 percent or lower for the baselines. The team also explored adding a JSON representation within the query step, which further improved performance, yielding pass rates of 84.4 to 98.9 percent.
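The paper’s exact schema is not reproduced here, but a structured intermediate of the kind described might look like the following sketch, with every field name invented for illustration:

```python
# Hypothetical JSON-style form of a parsed query (field names are illustrative,
# not the paper's schema). A structured intermediate like this is easier to
# translate into solver constraints than free-form text.
trip_query = {
    "origin": "Boston",
    "destination": "Tokyo",
    "duration_days": 7,
    "budget_usd": 3000,
    "constraints": [
        {"type": "hotel_rating", "min": 4},
        {"type": "max_connections", "value": 1},
    ],
}
```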
The MIT-IBM team posed additional challenges for their method. They examined how important each component of the solution was, for example by removing human feedback or the solver, and how that affected plan adjustments to unsatisfiable queries within 10 or 20 iterations, using a new dataset they created called UnsatChristmas, which includes unseen constraints, and a modified version of TravelPlanner. On average, the framework achieved 78.6 and 85 percent success on the two datasets, rising to 81.6 and 91.7 percent with additional plan-modification rounds. The researchers also analyzed how well it handled new, unseen constraints and paraphrased query-step and step-code prompts. In both cases it performed very well, including an 86.7 percent pass rate for the paraphrasing trial.
Lastly, the MIT-IBM researchers applied their framework to other domains, with tasks like block picking, task allocation, the traveling salesman problem, and warehouse operation. Here, the method must select numbered, colored blocks to maximize its score; optimize robot task assignment for different scenarios; plan trips that minimize distance traveled; and complete and optimize robot tasks in a warehouse.
“I think this is a very strong and innovative framework that can save a lot of time for humans, and also, it’s a very novel combination of the LLM and the solver,” says Hao.
This work was funded, in part, by the Office of Naval Research and the MIT-IBM Watson AI Lab.
© Photo: AdobeStock
Enrico Fermi.
Photo illustration by Liz Zonarich/Harvard Staff
Still waiting
Sy Boles
Harvard Staff Writer
75 years after Fermi’s paradox, are we any closer to finding alien life?
It was a simple question asked over lunch in 1950. Enrico Fermi, the Nobel Prize-winning physicist who helped usher in the atomic age, was dining with colleagues at Los Alamos, New Mexico, when the conversation turned to extraterrestrial life. Given the vastness of the universe and the statistical likelihood of other intelligent civilizations, Fermi wondered, “Where is everybody?”
Seventy-five years later, David Charbonneau, a professor of astronomy at the Center for Astrophysics | Harvard & Smithsonian, says we’re closer to an answer.
When Fermi posed his famous paradox, Charbonneau said, we hadn’t identified a single planet beyond our solar system. The 1995 discovery of the first exoplanet allowed scientists to break the paradox into smaller, more solvable questions: How many stars are there? How many of those stars have planets? What fraction of those planets are Earth-like? What fraction of Earth-like planets support life? And finally, what fraction of that life is intelligent?
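Read as arithmetic, those questions chain together as a product of counts and fractions, in the spirit of the Drake equation. One illustrative way to write the expected number of Earth-like planets bearing intelligent life (our notation, not the article’s) is:

```latex
% Illustrative Drake-style factorization of the questions above
N = N_{\star} \times f_{\text{planets}} \times f_{\text{Earth-like}}
      \times f_{\text{life}} \times f_{\text{intelligent}}
```

where each factor answers one question in turn, so sharpening any one of them tightens the overall estimate.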
“We have made tremendous progress on those questions,” said Charbonneau, who co-chaired the National Academies of Sciences, Engineering, and Medicine’s 2018 Committee on Exoplanet Science Strategy. “We now know that one in every four stars, at least, has a planet that is the same size as the Earth and is rocky, and is the same temperature as the Earth, so it’s what we would call a habitable-zone planet. Those are very secure conclusions.”
The next step is identifying biosignatures — chemicals in a planet’s atmosphere that could only be there because of biological processes. Charbonneau says that gathering the necessary evidence faces a major technological hurdle: It requires far more data than our current instruments can provide.
Recognizing that challenge, the National Academies’ Committee for a Decadal Survey on Astronomy and Astrophysics 2020, on which Charbonneau served as a panel member, recommended the development of the Habitable Worlds Observatory, a space telescope designed to hunt for chemical signs of life on other planets. The HWO, if it were built and launched, would image at least 25 potentially habitable worlds. The project remains tentative.
There’s still the question of just how common life, let alone intelligent life, really is. It’s possible, Charbonneau said, that if you take any habitable-zone planet, add water, oxygen, nitrogen, and phosphorus, and give it about a billion years, life will develop. Or you could have those very same conditions, and it would all remain stubbornly lifeless. Examining even the first habitable planet found for signs of life would give us a much better idea of how common life really is.
“If you look at the first one and there isn’t life, you’ve already learned, from a statistical perspective, that it’s not a guarantee that life forms. And then you have to think logarithmically. You have to think, maybe it’s one in 1,000 or maybe it’s one in a billion, or maybe it’s one in a trillion. And all those possibilities basically would mean there’s no life that we can interact with.”
Avi Loeb, Frank B. Baird Jr. Professor of Science at Harvard, says the search for extraterrestrial life should expand beyond traditional approaches. Loeb is the founder of the Galileo Project, which studies both unidentified aerial phenomena spotted here on Earth and physical objects that may have come from other solar systems.
The project is named for the Italian astronomer who was persecuted in the 17th century for arguing the Copernican theory that the Earth was not the center of the universe. Proof of billions of habitable planets in our galaxy alone is a reminder that we’re not as unique as we think we are, Loeb says. “The message from nature is, don’t be presumptuous, you are not privileged.”

Avi Loeb.
Harvard file photo

David Charbonneau.
Niles Singer/Harvard Staff Photographer
Loeb made headlines in 2018 when he suggested that ‘Oumuamua, the first known interstellar object to pass through our solar system, could be an alien lightsail or debris from an extraterrestrial ship. Despite pushback against the idea, Loeb says we shouldn’t sweep anomalies under the carpet: We should at least collect the data to find out for certain. He thinks Fermi did himself a disservice by idly wondering whether there were aliens, like someone who complains of being lonely but won’t try to meet new people.
“It’s the most romantic question on Earth,” Loeb said. “Do we have a partner out there?”
For Charbonneau, the chances of finding that partner are slim. Even under ideal circumstances — if our nearest interstellar neighbor, Proxima Centauri, hosted intelligent life with radio technology — sending a single message back and forth once would take the better part of a decade.
There’s also the chance that the aliens are less interested in us than we are in them.
“If you look around on the Earth, there are a lot of organisms, some would say intelligent organisms, that are not interested in developing technology, and they’re also maybe not interested in communicating,” Charbonneau said. “We humans love to communicate, and we love to connect, and maybe that’s just not a property of life: Maybe that’s really a property of humans.”
Melding data, systems, and society
Research that crosses the traditional boundaries of academic disciplines, and the boundaries between academia, industry, and government, is increasingly widespread and has sometimes spawned significant new disciplines. But Munther Dahleh, a professor of electrical engineering and computer science at MIT, says that such multidisciplinary and interdisciplinary work often suffers from shortcomings and handicaps compared to more traditionally focused disciplinary work.
Increasingly, he says, the profound challenges that face us in the modern world — including climate change, biodiversity loss, how to control and regulate artificial intelligence systems, and the identification and control of pandemics — require such meshing of expertise from very different areas, including engineering, policy, economics, and data analysis. That realization is what guided him, a decade ago, in the creation of MIT’s pioneering Institute for Data, Systems, and Society (IDSS), which aims to foster a more deeply integrated and lasting set of collaborations than the usual temporary and ad hoc associations that occur for such work.
Dahleh has now written a book detailing the process of analyzing the landscape of existing disciplinary divisions at MIT and conceiving of a way to create a structure aimed at breaking down some of those barriers in a lasting and meaningful way, in order to bring about this new institute. The book, “Data, Systems, and Society: Harnessing AI for Societal Good,” was published this March by Cambridge University Press.
The book, Dahleh says, is his attempt “to describe our thinking that led us to the vision of the institute. What was the driving vision behind it?” It is aimed at a number of different audiences, he says, but in particular, “I’m targeting students who are coming to do research that they want to address societal challenges of different types, but utilizing AI and data science. How should they be thinking about these problems?”
A key concept that has guided the structure of the institute is something he refers to as “the triangle.” This refers to the interaction of three components: physical systems, people interacting with those physical systems, and then regulation and policy regarding those systems. Each of these affects, and is affected by, the others in various ways, he explains. “You get a complex interaction among these three components, and then there is data on all these pieces. Data is sort of like a circle that sits in the middle of this triangle and connects all these pieces,” he says.
When tackling any big, complex problem, he suggests, it is useful to think in terms of this triangle. “If you’re tackling a societal problem, it’s very important to understand the impact of your solution on society, on the people, and the role of people in the success of your system,” he says. Often, he says, “solutions and technology have actually marginalized certain groups of people and have ignored them. So the big message is always to think about the interaction between these components as you think about how to solve problems.”
As a specific example, he cites the Covid-19 pandemic. That was a perfect example of a big societal problem, he says, and illustrates the three sides of the triangle: there’s the biology, which was little understood at first and was subject to intensive research efforts; there was the contagion effect, having to do with social behavior and interactions among people; and there was the decision-making by political leaders and institutions, in terms of shutting down schools and companies or requiring masks, and so on. “The complex problem we faced was the interaction of all these components happening in real-time, when the data wasn’t all available,” he says.
Making a decision, for example shutting schools or businesses, based on controlling the spread of the disease, had immediate effects on economics and social well-being and health and education, “so we had to weigh all these things back into the formula,” he says. “The triangle came alive for us during the pandemic.” As a result, IDSS “became a convening place, partly because of all the different aspects of the problem that we were interested in.”
Examples of such interactions abound, he says. Social media and e-commerce platforms are another case of “systems built for people, and they have a regulation aspect, and they fit into the same story if you’re trying to understand misinformation or the monitoring of misinformation.”
The book presents many examples of ethical issues in AI, stressing that they must be handled with great care. He cites self-driving cars as an example, where programming decisions in dangerous situations can appear ethical but lead to negative economic and humanitarian outcomes. For instance, while most Americans support the idea that a car should sacrifice its driver rather than kill an innocent person, they wouldn’t buy such a car. This reluctance lowers adoption rates and ultimately increases casualties.
In the book, he explains the difference, as he sees it, between the concept of “transdisciplinary” research and typical cross-disciplinary or interdisciplinary research. “They all have different roles, and they have been successful in different ways,” he says. The key is that most such efforts tend to be transitory, which can limit their societal impact. Even when people from different departments work together on projects, they lack shared journals, conferences, common spaces and infrastructure, and a sense of community. Creating an academic entity in the form of IDSS that explicitly crosses these boundaries in a fixed and lasting way was an attempt to address that lack. “It was primarily about creating a culture for people to think about all these components at the same time.”
He hastens to add that of course such interactions were already happening at MIT, “but we didn’t have one place where all the students are all interacting with all of these principles at the same time.” In the IDSS doctoral program, for instance, there are 12 required core courses — half of them from statistics and optimization theory and computation, and half from the social sciences and humanities.
Dahleh stepped down from the leadership of IDSS two years ago to return to teaching and to continue his research. But as he reflected on the work of that institute and his role in bringing it into being, he realized that unlike his own academic research, in which every step along the way is carefully documented in published papers, “I haven’t left a trail” to document the creation of the institute and the thinking behind it. “Nobody knows what we thought about, how we thought about it, how we built it.” Now, with this book, they do.
The book, he says, is “kind of leading people into how all of this came together, in hindsight. I want to have people read this and sort of understand it from a historical perspective, how something like this happened, and I did my best to make it as understandable and simple as I could.”
© Image courtesy of Munther Dahleh.
What your brain score says about your body

Mass General Brigham Communications
Simple tool can be used to identify risk factors for cancer and heart disease too, says new study
A “scorecard” designed to assess a person’s risk of developing brain-related conditions works similarly for heart disease and the three most common types of cancer, according to a new Mass General Brigham study published in Family Practice.
The McCance Brain Care Score, developed at Mass General Brigham, is a checklist designed to assess modifiable risk factors that influence brain health. The scorecard also serves as a practical framework to help individuals identify meaningful, achievable lifestyle changes that support brain health, and possibly systemic health. Previous studies showed that a higher score, indicating better brain care, is associated with a lower risk of stroke, dementia, and late-life depression.
“While the McCance Brain Care Score was originally developed to address modifiable risk factors for brain diseases, we have also found it’s associated with the incidence of cardiovascular disease and common cancers,” said senior author Sanjula Singh of the McCance Center for Brain Health at Massachusetts General Hospital and Harvard Medical School. “These findings reinforce the idea that brain disease, heart disease, and cancer share common risk factors and that by taking better care of your brain, you may also be supporting the health of your heart and body as a whole simultaneously.”
Neurological diseases such as stroke, dementia, and late-life depression are often driven by a combination of modifiable risk factors. Similarly, cardiovascular diseases — including ischemic heart disease, stroke, and heart failure — and the three most common cancers worldwide (lung, colorectal, and breast cancer) share many of these risk factors. At least 80 percent of cardiovascular disease cases and 50 percent of cancer cases are attributable to modifiable behaviors such as poor nutrition, physical inactivity, smoking, excessive alcohol use, elevated blood pressure, cholesterol, and blood sugar, as well as psychosocial factors like stress and social isolation.
Given this overlap, researchers used data from the UK Biobank to analyze health outcomes in 416,370 individuals aged 40 to 69 years. They found that a 5-point higher Brain Care Score at baseline was associated with a 43 percent lower risk of developing cardiovascular disease over a median follow-up of 12½ years. For cancer, a 5-point increase in Brain Care Score was associated with a 31 percent lower incidence of lung, colorectal, and breast cancer.
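As a back-of-the-envelope illustration (our arithmetic, under a log-linear assumption the study itself does not make), the reported 5-point association would translate to roughly an 11 percent lower cardiovascular risk per single point:

```python
# Back-of-the-envelope conversion; assumes the association is log-linear per
# point, which the study does not claim.
risk_ratio_per_5_points = 1 - 0.43                        # 43% lower risk -> 0.57
risk_ratio_per_point = risk_ratio_per_5_points ** (1 / 5) # fifth root, ~0.894
print(f"~{(1 - risk_ratio_per_point) * 100:.0f}% lower risk per point")  # ~11%
```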
The authors acknowledged several limitations. First, while the findings reveal strong associations, the study does not establish causality — although prior evidence suggests that some individual components of the Brain Care Score, such as smoking, physical activity, and blood pressure control, have causal links to specific outcomes. Second, because the UK Biobank includes only participants aged 40 to 69 at enrollment, the findings may not generalize to younger or older populations. Lastly, while the score provides a broad, accessible measure of brain health, it is not designed as a disease-specific predictive model.
“The goal of the McCance Brain Care Score is to empower individuals to take small, meaningful steps toward better brain health,” said lead author Jasper Senff, who conducted this work as a postdoctoral fellow in the Singh Lab within the Brain Care Labs at Massachusetts General Hospital. “Taking better care of your brain by making progress on your Brain Care Score may also be linked to broader health benefits, including a lower likelihood of heart disease and cancer.”
“Primary care providers around the world are under growing pressure to manage complex health needs within limited time,” said Singh. “A simple, easy-to-use tool like the McCance Brain Care Score holds enormous promise — not only for supporting brain health, but also for helping to address modifiable risk factors for a broader range of chronic diseases in a practical, time-efficient way.”
Funding for this study was provided by the National Institutes of Health and American Heart Association.
Dustin Tingley.
Photo by Grace DuVal
Numbers tell one story about climate change. People tell another.
Alvin Powell
Harvard Staff Writer
Policy expert Dustin Tingley studies transition to renewable energy, knows from work, life how economic shifts rattle through communities
Over the last decade, Dustin Tingley has reconsidered his beliefs about expertise.
As a public policy expert, Tingley has devised quantitative ways to understand the messy problems and sometimes messier datasets that abound in political economy, international trade, and political science. In recent years, he has turned his attention to the transition to renewable energy amid the quickening pace of climate change.
As he has done so, Tingley has found himself shifting focus from datasets that tell a story in numbers to stories told by people experiencing changing economic circumstances and climate-stressed times.
“I came to the realization that there was so much expertise about this topic that was not in academia,” said Tingley, the Thomas D. Cabot Professor of Public Policy at the Harvard Kennedy School and professor of government in the Faculty of Arts and Sciences. “You go where the knowledge is, and the knowledge is in the field. The knowledge is in the lived experience of communities and people.”
Tingley hasn’t abandoned data and its power to illuminate in ways beyond the reach of anecdote and personal experience. But he’s also come to recognize that data alone doesn’t tell the full story. Missing in high-level, numbers-driven discussions of things like jobs to be lost in the fossil fuel industry are the community-level impacts of shifts in industries that underpin not just household finances, but also local and regional economies. This includes everything from concerns around infrastructure to downtown retail zones to the ability of local governments to fund things like public safety and schools.
“It’s easy to think about this in terms of fossil-fuel jobs, which are important to focus on, but what that misses is that the local economic tax base depends on it,” said Tingley. “That was not on my radar at all, but it was one of the first things that people would raise. Local economic development officials or county commissioners would show, for example, a picture of their local football stadium. I didn’t have an appreciation of how embedded and suffused all this was.”
The work culminated in 2023’s “Uncertain Futures: How to Unlock the Climate Impasse.” The volume, co-authored with Alexander Gazmararian, relies on interviews, community meetings, and other forums to present an on-the-ground view of climate change and the coming energy transition. Focusing on individuals, business owners, and community leaders, it seeks lessons from those likely to feel the transition most deeply.
“I definitely brought my quantitative knowledge and expertise to bear,” Tingley said. “But the more qualitative interview, the listening and learning, was necessary because my slice of academia did not have the bigger picture.”
Tingley’s shift in perspective was perhaps preordained. Though his work in international relations has largely followed the numbers, a shift to communities in transition echoes the changes that rocked the places where he grew up.

His family’s financial situation improved over time, but they struggled when he was young. They lived in a rural part of North Carolina where furniture-making and tobacco-growing were important industries, both of which would encounter significant challenges. The region’s furniture industry declined due to foreign competition and outsourcing, while tobacco has long been under assault because of health concerns.
He also recalls visits to his father’s family in West Virginia, driving through coal country, with blasted mountaintops and mile-long coal trains. The economic impact of the industry’s decadeslong decline was apparent even to his young eyes, as he watched weathered shacks and cars on blocks in front yards pass by his window.
Tingley has always had an interest in the environment, which he says was piqued by his family’s move to New Jersey in middle school, with the Garden State’s juxtaposition of oil refineries and urban sprawl, fertile farmland and natural Pine Barrens.
But his first academic passion was international affairs. That interest developed in the years around the Cold War’s end, when his North Carolina elementary school had him crouching under his desk during nuclear drills, the Soviet Union collapsed, and later, in high school in the mid-1990s, when protests on the streets highlighted globalization’s inequities.
“I started to become more aware of the world, learning more systematically about war and conflict and the Cold War and international trade — you read stories about protests — I realized there’s a big world out there,” Tingley said.
In the late 1990s and early 2000s, Tingley studied at the University of Rochester, earning a political science degree and a minor in math. After teaching at a private school in New York for two years, he headed to graduate school at Princeton, where he became more deeply involved in research. The work blended statistics and political science, and he began developing new statistical tools when existing ones weren’t up to handling the complex and often unruly data sets.
“He’s full of energy and interested in a lot of different things,” said Kosuke Imai, one of Tingley’s professors at Princeton who today is professor of government and of statistics at Harvard. “We worked on these statistical methods, but he went on to other research about international trade and how that affects domestic actors. Now he’s on to climate change. He’s quite versatile in terms of being able to understand today’s need.”
Imai said Tingley’s energy is infectious and part of what makes him a good leader. At Princeton, Tingley was captain of the department’s softball team — Imai recalls being pulled from the whiteboard to the diamond on occasion.
At Harvard, Tingley has taken on a more formal leadership role as deputy vice provost for advances in learning, where he has worked to create educational tools and resources for students and co-chaired a study on climate education.
“He has an energy that is contagious to his colleagues, friends, and collaborators, plus he works extremely hard,” Imai said. “He’s also down-to-earth, doesn’t assume anything, and is a very straightforward person.”
After graduating in 2010, Tingley came to Harvard as an assistant professor of government where, over the next few years, he became increasingly interested in climate change. After gaining tenure in 2015, he began to look for climate-related problems to explore.
His research eventually touched on the U.S. Trade Adjustment Assistance program, which provides financial support to those who have lost their jobs due to international competition. He began to wonder whether workers displaced by the clean-energy transition might benefit from something similar.
“I thought, all these fossil-fuel people are going to lose their jobs, what are we going to do about them?” Tingley said. “I ran polling on an idea that I called ‘climate adjustment assistance,’ basically asking, ‘Would you support helping fossil-fuel workers transition?’ and bipartisan majorities supported it.”
Since “Uncertain Futures,” Tingley has continued his work. He is part of a research cluster at the Salata Institute for Climate and Sustainability that carries forward themes from his book. The work uses community surveys, public hearings, and in-person interviews to gather experiences and opinions on the best way forward.
In August 2024, Tingley and pre-doctoral fellow Ana Martinez authored a report, “Federal Land, Leasing, Energy, and Local Public Finances,” examining how differently the nation handles proceeds from fossil fuel versus wind- and solar-generating facilities.
Proceeds from fossil-fuel extraction on federal land are shared with nearby towns and states and provide important revenue for them. But the report noted that when it comes to renewable energy, the federal government keeps all the money.
The pair conducted a nationwide poll, finding that significant majorities of voters in both parties support sending renewable revenue to local communities, a step that might help build acceptance in the most affected places.
“Just convincing someone that there’s a problem is a totally different thing from putting in place a solution that they can afford,” Tingley said. “People vote with their pocketbooks.”
Also in this series:
- Aha moment in psych class clarifies childhood mystery: Inspires Susan Kuo’s research probing role of genetics in schizophrenia, autism
- Seeing is believing: Personal and global history made Jeremy Weinstein want to change the world. As dean of the Kennedy School, he’s found the perfect place to do it.
- ‘Heartbreaking’ encounter inspired long view on alcohol: One encounter changed everything for researcher who hopes to help mothers and families detect and treat the effects of dangerous drinking
- How to help urban young people progress? Nurture hope. Youth development specialist promotes holistic approach to healing, growth of individuals, communities amid poverty, drugs, trauma
- Her friends’ parents were dying of cancer. Then her mom got sick. Childhood tragedy sparks Harvard researcher Jen Cruz’s quest to root out public health inequities
How we really judge AI
Suppose you were shown that an artificial intelligence tool offers accurate predictions about some stocks you own. How would you feel about using it? Now, suppose you are applying for a job at a company where the HR department uses an AI system to screen resumes. Would you be comfortable with that?
A new study finds that people are neither entirely enthusiastic nor totally averse to AI. Rather than falling into camps of techno-optimists and Luddites, people are discerning about the practical upshot of using AI, case by case.
“We propose that AI appreciation occurs when AI is perceived as being more capable than humans and personalization is perceived as being unnecessary in a given decision context,” says MIT Professor Jackson Lu, co-author of a newly published paper detailing the study’s results. “AI aversion occurs when either of these conditions is not met, and AI appreciation occurs only when both conditions are satisfied.”
The paper, “AI Aversion or Appreciation? A Capability–Personalization Framework and a Meta-Analytic Review,” appears in Psychological Bulletin. The paper has eight co-authors, including Lu, who is the Career Development Associate Professor of Work and Organization Studies at the MIT Sloan School of Management.
New framework adds insight
People’s reactions to AI have long been subject to extensive debate, often producing seemingly disparate findings. An influential 2015 paper on “algorithm aversion” found that people are less forgiving of AI-generated errors than of human errors, whereas a widely noted 2019 paper on “algorithm appreciation” found that people preferred advice from AI, compared to advice from humans.
To reconcile these mixed findings, Lu and his co-authors conducted a meta-analysis of 163 prior studies that compared people’s preferences for AI versus humans. The researchers tested whether the data supported their proposed “Capability–Personalization Framework” — the idea that in a given context, both the perceived capability of AI and the perceived necessity for personalization shape our preferences for either AI or humans.
Across the 163 studies, the research team analyzed over 82,000 reactions to 93 distinct “decision contexts” — for instance, whether or not participants would feel comfortable with AI being used in cancer diagnoses. The analysis confirmed that the Capability–Personalization Framework indeed helps account for people’s preferences.
“The meta-analysis supported our theoretical framework,” Lu says. “Both dimensions are important: Individuals evaluate whether or not AI is more capable than people at a given task, and whether the task calls for personalization. People will prefer AI only if they think the AI is more capable than humans and the task is nonpersonal.”
He adds: “The key idea here is that high perceived capability alone does not guarantee AI appreciation. Personalization matters too.”
For example, people tend to favor AI when it comes to detecting fraud or sorting large datasets — areas where AI’s abilities exceed those of humans in speed and scale, and personalization is not required. But they are more resistant to AI in contexts like therapy, job interviews, or medical diagnoses, where they feel a human is better able to recognize their unique circumstances.
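Reduced to its decision rule, the framework is a simple conjunction. Here is a minimal sketch (our naming and examples, not the authors’ code):

```python
# Minimal sketch of the Capability-Personalization Framework's decision rule
# (function and variable names are ours, not the authors').
def predicts_ai_appreciation(ai_seen_as_more_capable: bool,
                             personalization_seen_as_needed: bool) -> bool:
    """People are predicted to prefer AI only when both conditions line up."""
    return ai_seen_as_more_capable and not personalization_seen_as_needed

# Contexts discussed in the article:
print(predicts_ai_appreciation(True, False))  # fraud detection -> True (appreciation)
print(predicts_ai_appreciation(True, True))   # medical diagnosis -> False (aversion)
```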
“People have a fundamental desire to see themselves as unique and distinct from other people,” Lu says. “AI is often viewed as impersonal and operating in a rote manner. Even if the AI is trained on a wealth of data, people feel AI can’t grasp their personal situations. They want a human recruiter, a human doctor who can see them as distinct from other people.”
Context also matters: From tangibility to unemployment
The study also uncovered other factors that influence individuals’ preferences for AI. For instance, AI appreciation is more pronounced for tangible robots than for intangible algorithms.
Economic context also matters. In countries with lower unemployment, AI appreciation is more pronounced.
“It makes intuitive sense,” Lu says. “If you worry about being replaced by AI, you’re less likely to embrace it.”
Lu is continuing to examine people’s complex and evolving attitudes toward AI. While he does not view the current meta-analysis as the last word on the matter, he hopes the Capability–Personalization Framework offers a valuable lens for understanding how people evaluate AI across different contexts.
“We’re not claiming perceived capability and personalization are the only two dimensions that matter, but according to our meta-analysis, these two dimensions capture much of what shapes people’s preferences for AI versus humans across a wide range of studies,” Lu concludes.
In addition to Lu, the paper’s co-authors are Xin Qin, Chen Chen, Hansen Zhou, Xiaowei Dong, and Limei Cao of Sun Yat-sen University; Xiang Zhou of Shenzhen University; and Dongyuan Wu of Fudan University.
The research was supported, in part, by grants to Qin and Wu from the National Natural Science Foundation of China.
© Credit: Christine Daniloff, MIT; iStock
“Each of us holds a piece of the solution”
MIT has an unparalleled history of bringing together interdisciplinary teams to solve pressing problems — think of the development of radar during World War II, or leading the international coalition that cracked the code of the human genome — but the challenge of climate change could demand a scale of collaboration unlike any that’s come before at MIT.
“Solving climate change is not just about new technologies or better models. It’s about forging new partnerships across campus and beyond — between scientists and economists, between architects and data scientists, between policymakers and physicists, between anthropologists and engineers, and more,” MIT Vice President for Energy and Climate Evelyn Wang told an energetic crowd of faculty, students, and staff on May 6. “Each of us holds a piece of the solution — but only together can we see the whole.”
Undeterred by heavy rain, approximately 300 campus community members filled the atrium in the Tina and Hamid Moghadam Building (Building 55) for a spring gathering hosted by Wang and the Climate Project at MIT. The initiative seeks to direct the full strength of MIT to address climate change, which Wang described as one of the defining challenges of this moment in history — and one of its greatest opportunities.
“It calls on us to rethink how we power our world, how we build, how we live — and how we work together,” Wang said. “And there is no better place than MIT to lead this kind of bold, integrated effort. Our culture of curiosity, rigor, and relentless experimentation makes us uniquely suited to cross boundaries — to break down silos and build something new.”
The Climate Project is organized around six missions, thematic areas in which MIT aims to make significant impact, ranging from decarbonizing industry to new policy approaches to designing resilient cities. The faculty leaders of these missions posed challenges to the crowd before circulating among attendees to share their perspectives and to discuss community questions and ideas.
Wang and the Climate Project team were joined by a number of research groups, startups, and MIT offices conducting relevant work today on issues related to energy and climate. For example, the MIT Office of Sustainability showcased efforts to use the MIT campus as a living laboratory; MIT spinouts such as Forma Systems, which is developing high-performance, low-carbon building systems, and Addis Energy, which envisions using the earth as a reactor to produce clean ammonia, presented their technologies; and visitors learned about current projects in MIT labs, including DebunkBot, an artificial intelligence-powered chatbot that can persuade people to shift their attitudes about conspiracies, developed by David Rand, the Erwin H. Schell Professor at the MIT Sloan School of Management.
Benedetto Marelli, an associate professor in the Department of Civil and Environmental Engineering who leads the Wild Cards Mission, said the energy and enthusiasm that filled the room was inspiring — but that the individual conversations were equally valuable.
“I was especially pleased to see so many students come out. I also spoke with other faculty, talked to staff from across the Institute, and met representatives of external companies interested in collaborating with MIT,” Marelli said. “You could see connections being made all around the room, which is exactly what we need as we build momentum for the Climate Project.”
© Photo: Ken Richardson