Accelerating and scaling MLIP inference | LeMaterial Reading Group
Paper Link: https://www.arxiv.org/abs/2601.21147

MLIPs face two primary bottlenecks that keep them from reaching realistic simulation scales: inference time and memory consumption. In this talk, I will discuss two recent works that address both problems: 1) DistMLIP (ICLR 2026), a distributed inference platform for MLIPs built on zero-redundancy 1-hop graph partitioning, and 2) smooth dynamic cutoffs, a novel method that effectively prunes edges of the underlying atomic graph while maintaining accuracy and accelerating inference.

Kevin is a second-year PhD student at Carnegie Mellon University researching machine learning methods for scaling molecular simulation, as well as LLM agentic post-training for scientific discovery applications. In molecular simulation, he is particularly interested in developing machine learning tools to simulate atomic systems at real-world sizes and over ultra-long timescales.

The LeMaterial Reading Group is a recurring gathering where we discuss recent papers at the intersection of AI, chemistry, and materials science.
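As background for the edge-pruning idea: many MLIPs weight each edge of the atomic graph by a smooth envelope that decays to exactly zero at a cutoff radius, so edges beyond the cutoff can be dropped without changing predictions. The sketch below shows the widely used cosine envelope; it illustrates the general mechanism only, not the talk's dynamic-cutoff method, and the function name is illustrative.

```python
import numpy as np

def cosine_cutoff(r, r_cut):
    """Standard smooth cutoff envelope common in MLIPs:
    decays smoothly from 1 at r = 0 to 0 at r = r_cut,
    and is exactly 0 for r >= r_cut."""
    r = np.asarray(r, dtype=float)
    env = 0.5 * (np.cos(np.pi * r / r_cut) + 1.0)
    return np.where(r < r_cut, env, 0.0)

# Edges with r >= r_cut carry zero weight, so they can be
# pruned from the atomic graph without affecting the output.
print(cosine_cutoff([0.0, 2.5, 5.0, 6.0], r_cut=5.0))
```

Because the envelope and its derivative vanish at the cutoff, pruning these edges leaves energies and forces continuous; a dynamic scheme can then tighten the effective cutoff per environment to shrink the graph further.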