Pengaruh Fine-tuning BERT Klasifikasi Teks, Pendadaran (Tesis) MTI UGM | Gusti Muhammad Riduan
Humans currently produce data in very large volumes and sizes across many domains, such as astronomy, business, medicine, economics, sports, weather, and finance. Consciously or not, data is all around us: hospitals keep medical records, and campuses keep student and lecturer data. Text classification, an important part of many natural language processing (NLP) applications, is one task for making sense of such data. Transformer-based models are currently state-of-the-art in the text domain, and many methods have been applied to this problem, such as Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), and Bidirectional Encoder Representations from Transformers (BERT). Although BERT has its advantages, the base model is general-purpose, serving tasks such as chatbot applications, machine translation, question answering, and text categorization and clustering.

In this final project, the author proposes an improvement on the BERT model, named i-BERT, which applies ReLU, Sigmoid, and dropout together with linear layers, and then fine-tunes several hyperparameters. This configuration is shown to produce excellent text-classification performance compared to the other models evaluated in this study. Keep in mind, however, that running this model requires substantial computation, namely a GPU as the processor, since millions of parameters are processed during training.
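The abstract does not spell out the exact i-BERT architecture, but it names the ingredients: linear layers combined with ReLU, Sigmoid, and dropout placed on top of BERT. A minimal NumPy sketch of such a classification head is shown below; all layer sizes, the dropout rate, and the class names are illustrative assumptions, and the random `pooled` tensor merely stands in for BERT's pooled [CLS] output.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Element-wise rectified linear unit.
    return np.maximum(0.0, x)

def sigmoid(x):
    # Squash logits into (0, 1) per-class probabilities.
    return 1.0 / (1.0 + np.exp(-x))

def dropout(x, rate, training):
    # Inverted dropout: zero units at random during training and
    # rescale the rest so the expected activation is unchanged.
    if not training or rate == 0.0:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

class ClassifierHead:
    """Hypothetical i-BERT-style head: Linear -> ReLU -> Dropout -> Linear -> Sigmoid.

    Sizes are assumptions: 768 matches BERT-base's hidden size; the
    256-unit inner layer and 2 output classes are illustrative.
    """
    def __init__(self, hidden=768, inner=256, n_classes=2):
        self.w1 = rng.normal(0.0, 0.02, (hidden, inner))
        self.b1 = np.zeros(inner)
        self.w2 = rng.normal(0.0, 0.02, (inner, n_classes))
        self.b2 = np.zeros(n_classes)

    def forward(self, pooled, training=False):
        h = relu(pooled @ self.w1 + self.b1)
        h = dropout(h, rate=0.1, training=training)
        return sigmoid(h @ self.w2 + self.b2)

# Stand-in for BERT's pooled output: a batch of 4 examples, hidden size 768.
pooled = rng.normal(size=(4, 768))
probs = ClassifierHead().forward(pooled)
print(probs.shape)  # (4, 2)
```

In actual fine-tuning, `pooled` would come from a pretrained BERT encoder, and the head's weights would be trained jointly with (or on top of) the encoder under the tuned hyperparameters the abstract mentions.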