Large language models have driven huge success in rich content generation across text, speech, images, video, and code. In this video I give a brief history of the evolution of these models. I start with Transformers, BERT, and GPT. Then we cover further natural language understanding models such as RoBERTa, ELECTRA, and DeBERTa, and natural language generation models such as BART and T5. We also discuss multilingual models (XLM, Unicoder, mBART, mT5, DeltaLM) and multimodal models (VisualBERT, ViLBERT, CLIP). To deploy these models in real-world settings, model compression and distributed training became essential, so we cover distillation, adapters, and mixture of experts. Finally, since prompt-based models have recently become popular, we discuss GPT-3, InstructGPT, and prompting in general. This is the story of modern NLP through the lens of large language models.

Here is the agenda:
00:00:00 Rich text generation
00:03:14 Transformers, BERT, GPT, T5
00:08:35 Natural Language Understanding: RoBERTa, ELECTRA, DeBERTa
00:13:21 Natural Language Generation: BART, T5
00:16:20 Multilingual models: XLM, Unicoder, mBART, mT5, DeltaLM
00:22:42 Multimodal models: VisualBERT, ViLBERT, CLIP
00:28:00 Compression and distributed training: Distillation, Adapters, Mixture of Experts
00:41:20 Prompt-based models: GPT-3, InstructGPT, Prompting