Let's Build our own PyTorch Part 1: Running GPU Ops in Numpy!
Code: https://github.com/priyammaz/MyTorch/...

This is going to be the most boring video in the series, so feel free to skip it. But I wanted to be thorough, so here you go! All I want is to be able to run NumPy methods on CUDA tensors. In PyTorch, tensors can live on the CPU or the GPU, and we can perform operations on them either way. Well, we have NumPy for CPU and CuPy for GPU. Can we make one homogeneous system that puts it all together? This video shows how to wrap our operations so data is piped through CuPy or NumPy depending on the device, giving us an array system very close to PyTorch's.

Timestamps:
00:00:00 - Introduction
00:00:45 - Overview
00:02:00 - Check CuPy install
00:03:10 - What do we want?
00:04:10 - Start the Array Module
00:04:48 - Manage Devices
00:12:00 - Manage Dtype
00:16:40 - Moving Arrays between Devices
00:22:40 - Casting to correct Dtype
00:23:50 - Store access information
00:25:26 - Defining Array Properties
00:30:00 - Write .astype() method to convert Dtype
00:32:00 - Write .to() method to move Arrays between Devices
00:35:30 - Write .asnumpy() method
00:36:40 - Adding (dunder) Operations to Array Module
00:52:30 - Pretty Printing our Arrays
00:57:30 - Adding support for all NumPy/CuPy methods
01:12:00 - getitem/setitem/getattr
01:17:35 - Creating New Arrays (Array Factory)
01:25:45 - Bigger Picture, why do all this?
01:27:30 - How it fits into our Tensor?

Socials!
X / data_adventurer
Instagram / nixielights
LinkedIn / priyammaz
Discord / discord
🚀 Github: https://github.com/priyammaz
🌐 Website: https://www.priyammazumdar.com/
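To give a flavor of the device-dispatch idea described above, here is a minimal sketch (not the video's actual code; see the linked repo for that). The `Array` class name, the `device="cpu"`/`"cuda"` strings, and the method names `.astype()`, `.to()`, and `.asnumpy()` mirror the timestamps but are assumptions about the implementation; the core trick is simply choosing `numpy` or `cupy` as the backend module based on the device. CuPy is treated as optional so the CPU path still works without a GPU:

```python
import numpy as np

try:
    import cupy as cp  # GPU backend; optional, only needed for device="cuda"
except ImportError:
    cp = None


class Array:
    """Hypothetical sketch: one array type that dispatches every op to
    numpy (device="cpu") or cupy (device="cuda")."""

    def __init__(self, data, device="cpu", dtype=None):
        self.device = device
        xp = self._xp()  # backend module for this device
        self.data = xp.asarray(data, dtype=dtype)

    def _xp(self):
        # Map the device string to the backing array library
        if self.device == "cpu":
            return np
        if self.device == "cuda":
            if cp is None:
                raise RuntimeError("cupy is not installed")
            return cp
        raise ValueError(f"unknown device: {self.device!r}")

    @property
    def shape(self):
        return self.data.shape

    @property
    def dtype(self):
        return self.data.dtype

    def astype(self, dtype):
        # Cast in place on the same device
        return Array(self.data.astype(dtype), device=self.device)

    def to(self, device):
        # Move the underlying buffer between backends
        if device == self.device:
            return self
        if device == "cpu":
            return Array(cp.asnumpy(self.data), device="cpu")
        return Array(self.data, device="cuda")  # cp.asarray copies host -> device

    def asnumpy(self):
        # Always hand back a host-side numpy array
        return self.data if self.device == "cpu" else cp.asnumpy(self.data)

    def __add__(self, other):
        other_data = other.data if isinstance(other, Array) else other
        return Array(self.data + other_data, device=self.device)

    def __mul__(self, other):
        other_data = other.data if isinstance(other, Array) else other
        return Array(self.data * other_data, device=self.device)

    def __repr__(self):
        return f"Array({self.data!r}, device={self.device!r})"
```

Usage looks deliberately PyTorch-like: `a = Array([1.0, 2.0, 3.0], device="cpu")`, then `(a + a).astype("float32")` stays on the same device, and `a.to("cuda")` would move the buffer to the GPU when CuPy is available. The full system in the video extends this with `__getattr__` forwarding so every NumPy/CuPy method works, plus pretty printing and an array factory.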