CUDA and Application to Task-Based Programming (part 1) | Eurographics'2021 Tutorial
Part 2: • CUDA and Application to Task-Based Program...

Since its inception, the CUDA programming model has been continuously evolving. Because the CUDA toolkit aims to consistently expose cutting-edge capabilities for general-purpose compute jobs, the features added in each new version reflect the rapid changes we observe in GPU architectures. Over the years, changes in hardware, a growing set of built-in functions and libraries, and advancing C++ standard compliance have expanded the design choices available when coding for CUDA and significantly altered the guidelines for achieving peak performance. In this tutorial, we give a thorough introduction to the CUDA toolkit and demonstrate how a contemporary application can benefit from recently introduced features, with a particular focus on how they apply to task-based GPU scheduling.

To provide a solid understanding of how CUDA applications can achieve peak performance, Part 1 of this tutorial outlines the modern CUDA architecture. Following a basic introduction, we show how language features are linked to - and constrained by - the underlying physical hardware components. Furthermore, we describe common applications for massively parallel programming, offer a detailed breakdown of potential issues, and list ways to mitigate performance impacts. An exemplary analysis of PTX and SASS snippets illustrates how code patterns in CUDA are mapped to actual hardware instructions.
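The execution model the tutorial covers (kernels launched over a grid of blocks of threads) can be sketched with a minimal SAXPY kernel. This is a standard CUDA illustration, not code taken from the tutorial materials; the kernel name and launch parameters are chosen for the example.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread computes one element. blockIdx, blockDim and threadIdx
// are the built-in variables that locate a thread within the grid.
__global__ void saxpy(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                      // guard: the grid may cover more than n elements
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));   // unified memory, for brevity
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    int block = 256;                     // threads per block: a multiple of the warp size (32)
    int grid  = (n + block - 1) / block; // enough blocks to cover all n elements
    saxpy<<<grid, block>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();             // kernel launches are asynchronous

    printf("y[0] = %f\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

The block size of 256 is a common starting point: because the hardware schedules threads in warps of 32, block sizes that are warp multiples avoid partially filled warps, one of the performance pitfalls discussed in Part 1.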
Course Notes and Code Samples: please find them on https://cuda-tutorial.github.io/

Syllabus:
- Fundamentals of CUDA
- History of the GPU
- The CUDA execution model
- Kernels, grids, blocks and warps
- Building CUDA applications
- Debugging and profiling
- Common CUDA libraries
- Understanding the GPU hardware
- The CUDA memory model
- Warp scheduling and latency hiding
- Independent thread scheduling
- Performance metrics and optimization
- Basics of PTX and SASS

Michael Kenzel, Bernhard Kerbl, Martin Winter, Markus Steinberger
https://diglib.eg.org/handle/10.2312/...