Transformer-based pre-trained models have recently achieved great results on many software engineering tasks, including automatic code completion, a staple of the developer's toolkit. While much work has striven to improve the code-understanding abilities of such models, the opposite direction, making the code itself easier to understand, has not been properly investigated. In this talk, I will discuss our paper "Enriching Source Code with Contextual Data for Code Completion Models: An Empirical Study", in which we investigate the impact of type annotations and different kinds of code comments on the performance of various code completion models. We use a TypeScript dataset to create versions of the same input with differing amounts of contextual information, and test whether this information aids model performance. I will go into detail about our process and findings, and discuss potential future avenues for making code completion models more context-aware.

Speaker -- Tim van Dam

Tim van Dam is a master's student at Delft University of Technology. Having researched the influence of contextual information on code completion performance during his bachelor's thesis, he now studies language models for code as a part-time student research assistant under supervisor Maliheh Izadi (SERG, TU Delft).

Meetup Group: https://www.meetup.com/machine-learni...
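The variant-creation step described in the abstract can be illustrated with a minimal sketch: starting from one TypeScript snippet, produce versions with and without comments and type annotations. This is a simplified, regex-based illustration written for this summary, not the paper's actual pipeline (which presumably uses proper parsing); the function names and the naive patterns, which ignore edge cases like comment markers inside strings or generic types, are assumptions.

```python
import re

def strip_comments(src: str) -> str:
    """Remove block comments (/* ... */) and line comments (// ...).

    Naive regexes for illustration: they do not handle comment
    markers that appear inside string literals.
    """
    src = re.sub(r"/\*.*?\*/", "", src, flags=re.DOTALL)
    src = re.sub(r"//[^\n]*", "", src)
    return src

def strip_type_annotations(src: str) -> str:
    """Remove simple ": Type" annotations after identifiers and parens.

    Handles only flat type names (no generics or object types);
    a real pipeline would use the TypeScript compiler for this.
    """
    return re.sub(r":\s*[A-Za-z_][\w\[\]]*", "", src)

snippet = """\
/** Adds two numbers. */
function add(a: number, b: number): number {
  return a + b; // simple sum
}
"""

# Three versions of the same input with decreasing contextual information.
variants = {
    "full": snippet,
    "no_comments": strip_comments(snippet),
    "untyped": strip_type_annotations(strip_comments(snippet)),
}
```

Each variant can then be fed to the same code completion model so that any performance difference is attributable to the contextual information alone.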