Stochastic Parrots or Emergent Reasoners: Can Large Language Models Understand?

David Chalmers
Center for Mind, Brain, and Consciousness, NYU

ISC Summer School on Large Language Models: Science and Stakes, June 3-14, 2024
Mon, June 10, 1:30pm-3pm EDT

Abstract: Some say large language models are stochastic parrots, or mere imitators who can't understand. Others say that reasoning, understanding, and other humanlike capacities may be emergent capacities of these models. I'll give an analysis of these issues, examining arguments for each view and distinguishing different varieties of "understanding" that LLMs may or may not possess. I'll also connect the issue of LLM understanding …

DAVID CHALMERS is University Professor of Philosophy and Neural Science and co-director of the Center for Mind, Brain, and Consciousness at New York University. He is the author of The Conscious Mind (1996), Constructing the World (2012), and Reality+: Virtual Worlds and the Problems of Philosophy (2022). He is known for formulating the "hard problem" of consciousness and, with Andy Clark, for the idea of the "extended mind," according to which the tools we use can become parts of our minds.

References:
Chalmers, D. J. (2023). Could a large language model be conscious? arXiv preprint arXiv:2303.07103.
Chalmers, D. J. (2022). Reality+: Virtual Worlds and the Problems of Philosophy. Penguin.
Chalmers, D. J. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200-219.
Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7-19.