Precision home robots learn with real-to-sim-to-real
CSAIL researchers introduce a novel approach that allows robots to be trained in simulations of scanned home environments, paving the way for customized household automation accessible to anyone.
Genomics and lab studies reveal numerous findings, including a key role for Reelin in neuronal vulnerability, and for choline and antioxidants in sustaining cognition.
MAIA is a multimodal agent that can iteratively design experiments to better understand various components of AI systems.
Neural network controllers provide complex robots with stability guarantees, paving the way for the safer deployment of autonomous vehicles and industrial machines.
The approach could help engineers design more efficient energy-conversion systems and faster microelectronic devices, reducing waste heat.
New CSAIL research highlights how LLMs excel in familiar scenarios but struggle in novel ones, calling into question whether they truly reason or merely rely on memorization.
This new tool offers an easier way for people to analyze complex tabular data.
Twelve faculty members have been granted tenure in six units across MIT’s School of Engineering.
These models, which can predict a patient’s race, gender, and age, seem to use those traits as shortcuts when making medical diagnoses.
The dedicated teacher and academic leader transformed research in computer architectures, parallel computing, and digital design, enabling faster and more efficient computation.
LLMs trained primarily on text can generate complex visual concepts through code with self-correction. Researchers used these illustrations to train an image-free computer vision system to recognize real photos.
Combining natural language and programming, the method enables LLMs to solve numerical, analytical, and language-based tasks transparently.
The method uses language-based inputs instead of costly visual data to direct a robot through a multistep navigation task.
DenseAV, developed at MIT, learns to parse and understand the meaning of language just by watching videos of people talking, with potential applications in multimedia search, language learning, and robotics.
MIT CSAIL’s frugal deep-learning model infers the hidden physical properties of objects, then adapts to find the most stable grasps for robots in unstructured environments like homes and fulfillment centers.