Knowable Magazine
Knowable Magazine reporter Katherine Ellison spotlights Future You, a new program developed by researchers at MIT that “offers young people a chance to chat with an online, AI-generated simulation of themselves at age 60.”
Prof. Daron Acemoglu highlights the importance of adopting alternative technologies in the face of AI advancements, reports Jared Newman for Fast Company. “We need investment for alternative approaches to AI, and alternative technologies, those that I would say are more centered on making workers more productive, and providing better information to workers,” says Acemoglu.
Using Cortico, a nonprofit collaboration with the MIT Center for Constructive Communication that aims “to facilitate conversations and spot themes across a large number of conversations,” NPR’s Morning Edition began a new project to learn more about communities, big and small, across the United States. “This project yielded hours and hours of taped conversations,” reports NPR. “So we used Cortico's AI tools and a prototype from MIT to search for shared themes across all the recordings so that we could listen more closely.”
Forbes reporter Joe McKendrick spotlights a study by researchers from the MIT Center for Collective Intelligence evaluating “the performance of humans alone, AI alone, and combinations of both.” The researchers found that “human–AI systems do not necessarily achieve better results than the best of humans or AI alone,” explain graduate student Michelle Vaccaro and her colleagues. “Challenges such as communication barriers, trust issues, ethical concerns and the need for effective coordination between humans and AI systems can hinder the collaborative process.”
In an interview with CNBC, Prof. Max Tegmark highlights the importance of increased AI regulation, specifically as a method to mitigate potential harm from large language models. “All other technologies in the United States, all other industries, have some kind of safety standards,” says Tegmark. “The only industry that is completely unregulated right now, which has no safety standards, is AI.”
Researchers from MIT and elsewhere have found that “AI doesn’t even understand itself,” reports Peter Coy for The New York Times. The researchers “asked AI models to explain how they were thinking about problems as they worked through them,” writes Coy. “The models were pretty bad at introspection.”
Liquid AI, an MIT startup, is developing technology that “holds the same promise of writing, analyzing, and creating content as its rivals while using far less computing power,” reports Aaron Pressman for The Boston Globe.
Prof. David Autor has been named a Senior Fellow in the Schmidt Sciences AI2050 Fellows program, and Profs. Sara Beery, Gabriele Farina, Marzyeh Ghassemi, and Yoon Kim have been named Early Career AI2050 Fellows, reports Michael T. Nietzel for Forbes. The AI2050 fellowships provide funding and resources, while challenging “researchers to imagine the year 2050, where AI has been extremely beneficial and to conduct research that helps society realize its most beneficial impacts,” explains Nietzel.
Prof. Daniela Rus, director of CSAIL, speaks with NBC Boston reporter Colton Bradford about her work developing a new AI system aimed at making grocery shopping easier, more personalized and more efficient. “I think there is an important synergy between what people can do and what machines can do,” says Rus. “You can think of it as machines have speed, but people have wisdom. Machines can lift heavy things, but people can reason about what to do with those heavy things.”
Writing for The New York Times, Prof. Anant Agarwal explores AI’s potential to “revolutionize education by enhancing paths to individual students in ways we never thought possible.” Agarwal emphasizes: “A.I. will never replace the human touch that is so vital to education. No algorithm can replicate the empathy, creativity and passion a teacher brings to the classroom. But A.I. can certainly amplify those qualities. It can be our co-pilot, our chief of staff helping us extend our reach and improve our effectiveness.”
Using a new technique developed to examine the risks of multimodal large language models used in robots, MIT researchers were able to have a “simulated robot arm do unsafe things like knocking items off a table or throwing them by describing actions in ways that the LLM did not recognize as harmful and reject,” writes Will Knight for Wired. “With LLMs a few wrong words don’t matter as much,” explains Prof. Pulkit Agrawal. “In robotics a few wrong actions can compound and result in task failure more easily.”
Researchers from MIT and elsewhere have compared 12 large language models (LLMs) against 925 human forecasters in a three-month forecasting tournament aimed at predicting real-world events, including geopolitical events, reports Tomas Gorny for Forbes. “Our results suggest that LLMs can achieve forecasting accuracy rivaling that of human crowd forecasting tournaments,” the researchers explain.
Forbes reporter John M. Bremen spotlights a new study by MIT researchers that “shows the most skilled scientists and innovators benefited the most from AI – doubling their productivity – while lower-skilled staff did not experience similar gains.” The study “showed that specialized AI tools foster radical innovation at the technical level within a domain-specific scope, but also risk narrowing human roles and diversity of thought,” writes Bremen.
Writing for Forbes, Senior Lecturer Guadalupe Hayes-Mota SB '08, MS '16, MBA '16 shares insight into how entrepreneurs can use AI to build successful startups. AI “can be a strategic advantage when implemented wisely and used as a tool to support, rather than replace, the human touch,” writes Hayes-Mota.
Prof. Armando Solar-Lezama speaks with New York Times reporter Sarah Kessler about the future of coding jobs, noting that AI systems still lack many essential skills. “When you’re talking about more foundational skills, knowing how to reason about a piece of code, knowing how to track down a bug across a large system, those are things that the current models really don’t know how to do,” says Solar-Lezama.