[PhD Proposal] Automated Calibration of LLM-Agent Based Educational Simulations
We are moving from the "Classical Era" of simulations to the "Generative Era," but our methods for controlling AI agents are stuck in the past. "Off-the-shelf" LLM capabilities aren't enough: simulations need rigorous authorability to be scientifically valid. This proposal defense outlines a complete framework for moving from brittle manual prompting to robust, automated calibration.

The Research Journey:

1. The Failure of Manual Authoring: Empirical evidence showing why hand-written prompts and schemas fail to scale or to provide the necessary cognitive scaffolding.

2. HypMix: A framework for modular, hypothesis-based calibration that allows for reusable behavioral priors.

3. The Consistency Gap: Results from the "Trust Game" revealing that LLM agents often fail to act consistently with their stated beliefs, showing that "roleplay" alone is not enough.

4. Proposed Work: Reframing calibration as an automated planning problem. By treating behavioral goals (like "balanced participation") as trajectory constraints, we can force agents to adhere to complex educational scenarios such as pair programming.
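To make item 4 concrete: a behavioral goal like "balanced participation" can be read as a constraint on the trajectory of turns, checked before each agent acts. The sketch below is purely illustrative; the function names, the sliding window, and the 60% share threshold are invented for this example and are not taken from the proposal itself.

```python
# Hypothetical sketch: "balanced participation" as a trajectory constraint
# on turn-taking in a pair-programming scenario. All names and thresholds
# are illustrative assumptions, not the proposal's actual planner.
from collections import Counter

def violates_balance(history, speaker, window=6, max_share=0.6):
    """Would letting `speaker` take the next turn push their share of
    the last `window` turns above `max_share`?"""
    recent = (history + [speaker])[-window:]
    if len(recent) < window:
        return False  # too little history to judge balance yet
    return Counter(recent)[speaker] / len(recent) > max_share

def select_turn(history, candidates):
    """Pick the first candidate whose turn keeps the trajectory inside
    the constraint; fall back to the least-active speaker otherwise."""
    for speaker in candidates:
        if not violates_balance(history, speaker):
            return speaker
    counts = Counter(history)
    return min(candidates, key=lambda s: counts[s])

# The driver has dominated recent turns, so the constraint steers the
# next turn to the navigator even though the driver is listed first.
history = ["driver", "driver", "driver", "navigator", "driver", "driver"]
print(select_turn(history, ["driver", "navigator"]))  # navigator
```

The point of the reframing is that such constraints are declarative: the same check can gate any agent's candidate actions, rather than being buried in a hand-written prompt that the agent may or may not follow.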