“UNDERSTANDING AI”: SEMANTIC GROUNDING IN LARGE LANGUAGE MODELS

Holger Lyre
Theoretical Philosophy & Center for Behavioral Brain Sciences, University of Magdeburg

ISC Summer School on Large Language Models: Science and Stakes, June 3–14, 2024
Fri, June 14, 9:00–10:30am EDT

ABSTRACT: Do LLMs understand the meaning of the texts they generate? Do they possess semantic grounding? And how could we determine whether, and what, they understand? We have recently witnessed a generative turn in AI, since generative models, including LLMs, are key to self-supervised learning. To assess the question of semantic grounding, I distinguish and discuss five methodological approaches. The most promising is to apply core assumptions of theories of meaning from the philosophy of mind and language to LLMs. Grounding proves to be a gradual affair, with a three-dimensional distinction between functional, social, and causal grounding. LLMs show basic evidence in all three dimensions. A strong argument is that LLMs develop world models. Hence, LLMs are neither stochastic parrots nor semantic zombies, but already understand the language they generate, at least in an elementary sense.

HOLGER LYRE is Professor of Theoretical Philosophy and a member of the Center for Behavioral Brain Sciences (CBBS) at the University of Magdeburg. His research areas comprise philosophy of science, neurophilosophy, philosophy of AI, and philosophy of physics. His publications include 4 (co-)authored and 4 (co-)edited books as well as about 100 papers. He has worked on the foundations of quantum theory and gauge symmetries and has made contributions to structural realism, semantic externalism, extended mind, reductionism, and structural models of the mind. See www.lyre.de

Lyre, H. (2024). “Understanding AI”: Semantic Grounding in Large Language Models. arXiv preprint arXiv:2402.10992.
Lyre, H. (2022). Neurophenomenal structuralism. A philosophical agenda for a structuralist neuroscience of consciousness. Neuroscience of Consciousness, 2022(1), niac012.
Lyre, H. (2020). The state space of artificial intelligence. Minds and Machines, 30(3), 325–347.