Dr Roman Feiman, Brown University
How can deep learning inform theory in psychological science?

Abstract: Although Large Language Models (LLMs) are fairly new, many of the key engineering principles they rely on had already been developed by the 1980s. At that time, there was a lively debate in the new field of cognitive science about whether "connectionist" architectures could really model the human mind. Though many of the most interesting arguments were a priori, many negative conclusions really depended on the empirical inadequacy of the networks of the time. Fast forward to today, and LLMs can do human-like things that no one back then (neither proponents nor critics) imagined possible. These successes invite us to revisit the old debates and ask: can LLMs be good (both empirically successful and explanatory) models of cognition? In this talk, I'll argue that the answer might be yes for at least one key aspect of cognition, one that has long been a flagship case in the arguments against connectionism -- logical reasoning. I will show evidence that LLMs not only match our best existing accounts of human logical reasoning on a key task, but that they actually suggest new theoretical proposals for how humans reason and give us new ways to evaluate competing proposals.