To keep hardware safe, cut out the code’s clues
New “Oreo” method from MIT CSAIL researchers removes footprints that reveal where code is stored before a hacker can see them.
New faculty member Kaiming He discusses AI’s role in lowering barriers between scientific fields and fostering cross-disciplinary collaboration.
MIT researchers developed a new approach for assessing predictions with a spatial dimension, like forecasting weather or mapping air pollution.
The consortium will bring researchers and industry together to focus on impact.
By automatically generating code that leverages two types of data redundancy, the system saves bandwidth, memory, and computation.
MIT CSAIL Principal Research Scientist Una-May O’Reilly discusses how she develops agents that reveal AI models’ security weaknesses before hackers do.
Starting with a single frame of a simulation, a new system uses generative AI to emulate the dynamics of molecules, turning static molecular structures and blurry snapshots into videos.
Rapid development and deployment of powerful generative AI models comes with environmental consequences, including increased electricity demand and water consumption.
Inspired by the human vocal tract, a new AI model can produce and understand vocal imitations of everyday sounds. The method could help build new sonic interfaces for entertainment and education.
The Thermochromorph printmaking technique developed by CSAIL researchers allows images to transition into each other through changes in temperature.
Using this model, researchers may be able to identify antibody drugs that can target a variety of infectious diseases.
Biodiversity researchers tested vision systems on how well they could retrieve relevant nature images. More advanced models performed well on simple queries but struggled with more research-specific prompts.
Five MIT faculty and staff, along with 19 additional alumni, are honored for electrical engineering and computer science advances.
With models like AlphaFold3 limited to academic research, the team built an equivalent alternative to encourage broader innovation.
Researchers at MIT, NYU, and UCLA develop an approach to help evaluate whether large language models like GPT-4 are equitable enough to be clinically viable for mental health support.