Auto-Formalization for Trustworthy Planning
Despite the rapid advancement of AI, most systems in high-stakes applications remain limited to rule-based interactions and cannot reliably plan or execute complex user tasks. Recent efforts use large language models (LLMs) to plan as agents, but their hallucinations and lack of verifiability undermine executability and trust, preventing real-world deployment. This proposal advances an alternative paradigm: LLM-as-formalizer. Instead of relying on LLMs to generate plans directly, we use them as code generators that translate a user's environment and goal into formal languages (such as PDDL) that can be deterministically solved by off-the-shelf solvers. This neurosymbolic approach combines the flexibility of LLMs with the reliability of symbolic systems, offering a pathway toward trustworthy, generalizable planning. In this talk, I will discuss several advances from 2025, including a comprehensive evaluation of LLMs' auto-formalization ability under a unified methodological framework, as well as ongoing work on iterative and multi-agent planning in partially observable environments.

Li "Harry" Zhang is an assistant professor at Drexel University, focusing on Natural Language Processing (NLP) and artificial intelligence (AI). He obtained his PhD from the University of Pennsylvania in 2024, advised by Prof. Chris Callison-Burch and chaired by Prof. Dan Roth. In 2023 he was a year-long intern at the Allen Institute for Artificial Intelligence. He obtained his Bachelor's degree from the University of Michigan in 2018, mentored by Prof. Rada Mihalcea and Prof. Dragomir Radev. His research agenda uses large language models (LLMs) as auto-formalizers for trustworthy problem-solving and was accepted to the AAAI 2026 New Faculty Highlights program. He has published more than 30 peer-reviewed papers at NLP and AI conferences such as ACL, EMNLP, and NAACL, which have been cited more than 3,000 times.
He also consistently serves as Area Chair, Session Chair, and reviewer at those venues. Outside academia, he is a sponsored musician, producer, and content creator with over 60,000 subscribers across streaming platforms.
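To make the LLM-as-formalizer idea from the abstract concrete, here is a minimal sketch of the pipeline. All names (`formalize`, `TOY_TASK`) and the toy blocks-world task are illustrative assumptions, not from the talk: the `formalize` function stands in for an LLM call and returns a hard-coded PDDL (domain, problem) pair, which a real system would hand to an off-the-shelf classical planner rather than interpret itself.

```python
# Sketch of the LLM-as-formalizer paradigm: instead of asking an LLM for a
# plan, ask it for PDDL and delegate solving to a deterministic planner.
# Everything here is a hypothetical stand-in, hard-coded for one toy task.

TOY_TASK = "A block 'a' sits on the table; pick it up."

def formalize(task: str) -> tuple[str, str]:
    """Stand-in for the LLM formalizer call: translate a natural-language
    task description into a PDDL domain and problem. In the real pipeline
    these strings would be generated by an LLM, not hard-coded."""
    domain = (
        "(define (domain blocks)\n"
        "  (:requirements :strips)\n"
        "  (:predicates (on-table ?x) (clear ?x) (holding ?x) (arm-empty))\n"
        "  (:action pickup\n"
        "    :parameters (?x)\n"
        "    :precondition (and (on-table ?x) (clear ?x) (arm-empty))\n"
        "    :effect (and (holding ?x)\n"
        "                 (not (on-table ?x)) (not (arm-empty)))))"
    )
    problem = (
        "(define (problem pick-a)\n"
        "  (:domain blocks)\n"
        "  (:objects a)\n"
        "  (:init (on-table a) (clear a) (arm-empty))\n"
        "  (:goal (holding a)))"
    )
    return domain, problem

domain, problem = formalize(TOY_TASK)
# An off-the-shelf solver (e.g. Fast Downward) would now be run on these
# two artifacts, typically by writing them to files and invoking the
# planner as a subprocess. Because the solver is deterministic and the
# resulting plan can be checked against PDDL semantics, the output is
# verifiable -- the trust argument the abstract makes.
print(domain.splitlines()[0])
```

The division of labor is the point of the design: the LLM handles only the flexible translation step, while correctness of the plan rests on the symbolic solver.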