Presentations by Olivia Newton (University of Montana) and Andrew Schmeder (LBNL) on March 3 in the Consortium for the Advancement of Scientific Software (CASS; https://cass.community/) User/Developer Experience working group. The speakers discuss research software engineers, how they work in teams with AI tools, and what performance they can expect from AI tools for science. Visit the working group website to learn more about upcoming events: https://cass.community/working-groups...

Olivia Newton presents: A survey of teamwork, AI, and their integration in scientific computing: Preliminary findings

We report results of survey and interview research conducted with 79 scientists and software professionals to better understand current practices, needs, and challenges related to teamwork and AI use in scientific computing. Study participants reported mixed awareness of best practices for effective teamwork and expressed interest in training on topics across team science, software engineering, and AI use. Although most participants reported using AI, fewer than half indicated that their teams engage in conversations about policies for AI use in their collaborative work. Our results further suggest that there is no clear consensus on the best applications for AI in scientific computing to date. Lastly, we discuss the ways that AI is altering team dynamics and development processes. Together, these findings highlight opportunities to strengthen cross-disciplinary collaboration and team-based scientific software practices.

Andrew Schmeder presents: Scientific Coding with AI - SciCode Bench Insights & Agentic Workflows

Can LLMs actually perform “PhD-level” tasks, specifically in scientific coding, as claimed by AI companies? Recent advances have enabled the majority of UI and infrastructure code to be automated using AI, but can AI write scientific code? In this short talk, we will review the results of running the SciCode benchmark on 60 different model configurations over the past 9 months on Berkeley Lab’s CBorg AI inference gateway. We will discuss insights regarding evals, optimizing inference costs, the performance of open-weight versus commercial flagship models, and measuring the rate of model improvement. In the second half, we will look at a demo of an autonomous scientific agent that can perform data discovery, data transfers, and data analysis via a chat interface.