Trustworthy World Models for Safe & Generalist Robots - Anirudha Majumdar
Abstract: World models in the form of action-conditioned video generation models have the potential to serve as general-purpose simulators for robotics. Their ability to generate photorealistic observations, simulate complex physical interactions, and be improved with data makes them an attractive alternative to traditional physics-based simulation for policy evaluation, data generation, and inference-time planning. However, the data-driven nature of world models also leads to outputs that are not always trustworthy. Indeed, current models exhibit a variety of hallucinations: objects can appear or disappear, deform in unrealistic ways, or move in a manner that defies physics. In this talk, I will first briefly highlight our work at Google DeepMind on using Veo to evaluate Gemini Robotics policies. I will then argue that knowing when and where to trust the generations of world models is critical in robotics applications. I will describe very recent work on training world models that output dense confidence estimates at the subpatch (channel) level, precisely localizing the uncertainty in each generated video frame. By training with a proper scoring rule, we ensure that the resulting uncertainty estimates are well calibrated. Through extensive experiments on large-scale robotics datasets (DROID and Bridge), we demonstrate how our method allows us to identify uncertainty in a calibrated manner, while also enabling out-of-distribution detection in unseen domains. To our knowledge, this is the first work on calibrated uncertainty quantification for action-conditioned video models.

Bio: Anirudha Majumdar is an Associate Professor at Princeton University in the Mechanical and Aerospace Engineering (MAE) department, and founding co-Director of the Princeton Robotics Initiative. He also holds a 20% research scientist position at Google DeepMind in the Robotics Safety & Alignment team. Majumdar received a Ph.D. in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology in 2016, and a B.S.E. in Mechanical Engineering and Mathematics from the University of Pennsylvania in 2011. Subsequently, he was a postdoctoral scholar at Stanford University from 2016 to 2017 at the Autonomous Systems Lab in the Aeronautics and Astronautics department. Majumdar is a recipient of the Sloan Fellowship, ONR Young Investigator Program (YIP) award, NSF CAREER award, Google Faculty Research Award (twice), Amazon Research Award (twice), Young Faculty Researcher Award from the Toyota Research Institute, Best Student Paper Award (as advisor) at the Conference on Robot Learning (CoRL), Paper of the Year Award from the International Journal of Robotics Research (IJRR), Best Conference Paper Award at the International Conference on Robotics and Automation (ICRA), Alfred Rheinstein Faculty Award (Princeton), and the Excellence in Teaching Award (Princeton SEAS).
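The abstract mentions training with a proper scoring rule to obtain calibrated, channel-level confidence estimates, but does not spell out the loss. As a purely illustrative sketch (not the method from the talk), the snippet below assumes a per-channel heteroscedastic Gaussian negative log-likelihood, which is one standard proper scoring rule: the model predicts a mean and a log-variance per channel, and low confidence is expressed as large predicted variance. All names and shapes here are hypothetical.

```python
# Illustrative sketch only: the abstract does not specify the exact objective,
# so this assumes a per-channel heteroscedastic Gaussian negative
# log-likelihood (a proper scoring rule) as the training loss for
# calibrated, channel-level uncertainty.
import numpy as np

def gaussian_nll(mean, log_var, target):
    """Per-channel Gaussian NLL, averaged over all patches and channels.

    mean, log_var, target: arrays of shape (..., C), where C is the number
    of channels; log_var is the predicted log-variance, so the model signals
    low confidence by predicting a large variance for that channel.
    """
    var = np.exp(log_var)
    # Constant terms of the Gaussian log-density are dropped.
    return 0.5 * np.mean(log_var + (target - mean) ** 2 / var)

# Hypothetical toy example: one "frame" of 4x4 patches with 3 channels.
rng = np.random.default_rng(0)
target = rng.normal(size=(4, 4, 3))
mean = target + 0.1 * rng.normal(size=target.shape)   # model's predicted frame
log_var = np.full_like(target, np.log(0.01))           # model's predicted confidence
print("NLL:", gaussian_nll(mean, log_var, target))
```

Under a loss of this form, calibration roughly means the predicted variance matches the squared prediction error on average, so the per-channel confidence map can be read as a localized trust estimate for each generated frame.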