Talk given by Rio Yokota at the KUIS AI Talks on December 23, 2025.

Title: Scaling Laws in HPC and AI - My Take on the Bitter Lesson

Abstract: When computers were less powerful and data was less abundant, many sophisticated models were invented to simulate (in HPC) or learn (in AI) the complex world around us. As the capability of computers has improved at an exponential rate over the past fifty years, brute-force computing with simpler models has made sophisticated models obsolete in some areas. For example, in simulating the physics of fluids, sophisticated RANS models have been replaced by the simpler LES and DNS approaches. Similarly, simple deep neural networks with automatic differentiation have replaced mathematically sophisticated statistical learning methods. However, while the transition from RANS to LES happened nearly two decades ago, scaling laws in AI arrive at a time when Moore's law is approaching its end. This difference has many implications, ranging from the design of computer architectures to the dynamics between sophisticated modeling and brute-force computing. Understanding the similarities and differences between these scaling laws is the key to predicting the dynamics between HPC and AI in the coming years.

Short Bio: Rio Yokota is a Professor at the Supercomputing Research Center, Institute of Integrated Research, Institute of Science Tokyo. He also leads the AI for Science Foundation Model Research Team at the RIKEN Center for Computational Science. His research interests lie at the intersection of high-performance computing, machine learning, and linear algebra. He has been optimizing algorithms on GPUs since 2007 and was part of a team that received the Gordon Bell Prize in 2009 using the first GPU supercomputer. More recently, he has been leading distributed training efforts on Japanese supercomputers such as ABCI, TSUBAME, and Fugaku. He is a co-developer of the Japanese LLMs Swallow and LLM-jp, and is also involved in the organization of multinational collaborations such as ADAC and TPC.
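For readers unfamiliar with the term, a neural scaling law of the kind the abstract refers to expresses model loss as a power law in parameter count and training data. The sketch below is a minimal illustration, assuming the Chinchilla-form law with the fitted constants published by Hoffmann et al. (2022); it is not taken from the talk, and the numbers are illustrative only.

```python
# Minimal sketch of a Chinchilla-style scaling law (Hoffmann et al., 2022):
# loss as a function of parameter count N and training tokens D.
# The constants below are the published fits, used here purely for
# illustration; they are not from the talk.

def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    E, A, B = 1.69, 406.4, 410.7   # irreducible loss and fit coefficients
    alpha, beta = 0.34, 0.28       # fitted power-law exponents
    return E + A / n_params**alpha + B / n_tokens**beta

# Brute-force scaling lowers loss predictably: quadrupling compute
# (2x parameters, 2x tokens) shaves off a small, predictable amount.
print(chinchilla_loss(70e9, 1.4e12))   # roughly Chinchilla-scale
print(chinchilla_loss(140e9, 2.8e12))  # 4x compute, lower loss
```

The predictability of such curves is what makes "just scale up" a viable strategy, and, as the abstract notes, the open question is how that strategy fares once Moore's law no longer supplies the exponential hardware gains.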