Large Reasoning Models (LRMs) are the new gold standard among frontier LLMs. While relatively new (OpenAI's o1, released in 2024, was the first model trained to reason), the root of their power traces back to an older and simpler concept: Chain-of-Thought (CoT). In this shot, we implement a classic paper that took CoT to the next level: "Answering Questions by Meta-Reasoning over Multiple Chains of Thought". The premise is simple but deadly effective: execute multiple independent reasoning paths and smartly aggregate them to yield the best result. Even though this paper is more than two years old, it's more relevant today than ever. Watch as we implement it from scratch and show you how to squeeze maximum juice out of LLMs with one easy trick!

This is a joint collaboration with Prof. Jonathan Berant from Tel Aviv University, whose research lab authored the paper mentioned above.

👉🏻 Read the full paper: https://arxiv.org/abs/2304.13007
👉🏻 Join our Telegram group for further discussions and ideas: https://t.me/one_shot_learning
👉🏻 Download the notebook: https://github.com/one-shot-learning/...
👉🏻 Subscribe to our mailing list: https://www.oneshotlearning.io/#newsl...
👉🏻 Visit us at https://www.oneshotlearning.io
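The "multiple independent chains, then aggregate" premise can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `fake_generate` is a hypothetical stand-in for an LLM call, and the aggregation shown is a simple majority vote over final answers (the self-consistency baseline); the paper instead passes all chains to a meta-reasoner LLM that reads them jointly before answering.

```python
from collections import Counter
import itertools

def sample_chains(question, generate, k=5):
    """Sample k independent chains of thought for one question.
    `generate` stands in for an LLM call returning (reasoning, answer)."""
    return [generate(question) for _ in range(k)]

def aggregate_by_vote(chains):
    """Majority vote over the chains' final answers.
    Note: the paper's meta-reasoner replaces this step with an LLM
    that reads all chains jointly and produces the final answer."""
    answers = [answer for _reasoning, answer in chains]
    return Counter(answers).most_common(1)[0][0]

# Toy stand-in for a sampled LLM: cycles through noisy answers so
# the example runs without any API access.
_fake = itertools.cycle([("chain...", "42"), ("chain...", "42"), ("chain...", "17")])
def fake_generate(question):
    return next(_fake)

chains = sample_chains("What is 6 * 7?", fake_generate, k=5)
print(aggregate_by_vote(chains))  # → 42
```

Sampling the chains at a nonzero temperature is what makes them independent; the aggregation step then recovers a more reliable answer than any single chain.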