Title: Investigating Methods to Improve Language Model Integration for Attention-based Encoder-Decoder ASR Models (oral presentation)

Authors: Mohammad Zeineldeen, Aleksandr Glushko, Wilfried Michel, Albert Zeyer, Ralf Schlüter, Hermann Ney (RWTH Aachen University, Germany)

Category: Neural Network Training Methods and Architectures for ASR

Abstract: Attention-based encoder-decoder (AED) models learn an implicit internal language model (ILM) from the training transcriptions. Integrating an external LM trained on much more unpaired text usually improves performance. A Bayesian interpretation, as in the hybrid autoregressive transducer (HAT), suggests dividing by the prior of the discriminative acoustic model, which corresponds to this implicit LM, similarly to the hybrid hidden Markov model approach. In general, the implicit LM cannot be computed efficiently, and it is still unclear which methods estimate it best. In this work, we compare different approaches from the literature and propose several novel methods to estimate the ILM directly from the AED model. Our proposed methods outperform all previous approaches. We also investigate other methods to suppress the ILM, mainly by decreasing the capacity of the AED model, limiting the label context, and training the AED model together with a pre-existing LM.

For more details and the PDF version of the paper, visit: https://www.isca-speech.org/archive/i...
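The Bayesian interpretation mentioned in the abstract amounts to combining scores in log space: the external LM score is added with some weight while the estimated ILM score is subtracted, so the AED posterior is effectively divided by its internal prior. A minimal sketch of this density-ratio-style fusion for a single decoding step (the vocabulary, probabilities, and weights below are illustrative assumptions, not values from the paper):

```python
import math

# Toy per-token distributions over a 3-word vocabulary (illustrative values).
p_aed = {"cat": 0.6, "sat": 0.3, "mat": 0.1}   # AED posterior p(y | x, history)
p_ext = {"cat": 0.2, "sat": 0.5, "mat": 0.3}   # external LM p_LM(y | history)
p_ilm = {"cat": 0.5, "sat": 0.3, "mat": 0.2}   # estimated internal LM p_ILM(y | history)

def fused_score(token, lam=0.6, mu=0.4):
    """HAT-style fusion: log p_AED + lam * log p_LM - mu * log p_ILM.

    lam and mu are hypothetical fusion weights; in practice they are
    tuned on a development set.
    """
    return (math.log(p_aed[token])
            + lam * math.log(p_ext[token])
            - mu * math.log(p_ilm[token]))

# Pick the best token under the fused score at this step.
best = max(p_aed, key=fused_score)
```

Note how subtracting the ILM can change the ranking: the AED posterior alone prefers "cat", but after adding the external LM and removing the internal prior, "sat" wins here.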