Introduction to Neural Networks (Lecture 22)
#AI #PyTorch #DeepLearning #NeuralNetworks #Tensors #Micrograd #MetaAI

Welcome to the twenty-second lecture of my Deep Learning series! 🧠⚡

In the previous lectures, we built our own autograd engine ("Micrograd") from scratch, working with scalar values. It was educational, it was raw, and it helped us build intuition. Today, we graduate: we take our logic and port it to the industry-standard framework used by researchers at Meta, Tesla, and OpenAI: PyTorch. This lecture bridges the gap between understanding how things work and using the tools that make them work efficiently in production. We discover that PyTorch is essentially just a high-performance, array-based version of the engine we just built!

In this video, we cover:
✅ Scalar vs. Tensor Engines: We discuss the difference between our educational engine (scalar-level, slow, easy to read) and PyTorch (tensor-level, parallelized, production-ready).
✅ PyTorch History & Trivia: A bit of storytelling about the origins of PyTorch, the FAIR (Facebook AI Research) lab, Yann LeCun, and the contributions of Soumith Chintala.
✅ Tensors 101: We install PyTorch (pip install torch) and learn the basics: creating tensors, checking shapes, and understanding data types.
✅ The Precision Trap (Float32 vs. Float64): We observe a crucial detail: Python floats are 64-bit (double), while PyTorch defaults to 32-bit. To make our comparisons exact, we convert our tensors using .double().
✅ The Magic Switch (requires_grad=True): We learn how to tell PyTorch to track gradients for specific variables (leaf nodes), mirroring exactly what we did in our Value class.
✅ The Great Comparison: We rebuild the exact same two-neuron computational graph from the previous lecture using PyTorch. We perform the forward pass, call .backward(), and inspect the gradients.
✅ Validation: The moment of truth: we compare the gradients calculated by PyTorch against our Micrograd engine. The result? They match perfectly!
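The steps above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the lecture's exact notebook: the input values, weights, and bias constant below belong to a standard single-tanh-neuron demo, and the variable names (x1, w1, b, etc.) are my own.

```python
import torch

# --- Tensors 101: creation, shape, and data types ---
t = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
print(t.shape)           # torch.Size([2, 2])
print(t.dtype)           # torch.float32 -- PyTorch's default
print(t.double().dtype)  # torch.float64 -- matches Python's native float

# --- Autograd: a tiny neuron, o = tanh(x1*w1 + x2*w2 + b) ---
# .double() upgrades each leaf to 64-bit so results compare exactly
# against a scalar engine that uses Python floats.
# requires_grad=True tells PyTorch to track gradients for these leaves,
# just like our Value class did.
x1 = torch.tensor(2.0).double();  x1.requires_grad = True
x2 = torch.tensor(0.0).double();  x2.requires_grad = True
w1 = torch.tensor(-3.0).double(); w1.requires_grad = True
w2 = torch.tensor(1.0).double();  w2.requires_grad = True
b  = torch.tensor(6.8813735870195432).double(); b.requires_grad = True

# Forward pass: the same arithmetic a scalar Value engine performs.
n = x1 * w1 + x2 * w2 + b
o = torch.tanh(n)

# Backward pass: PyTorch populates .grad on every tracked leaf.
o.backward()

print(o.item())        # forward result, ~0.7071
print(x1.grad.item())  # do/dx1 = w1 * (1 - o^2) = -1.5
print(w1.grad.item())  # do/dw1 = x1 * (1 - o^2) = 1.0
```

To validate against Micrograd, you would print the same .grad values from the scalar engine and check that the numbers agree digit for digit, which is exactly why the .double() conversion matters.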
By the end of this lecture, you will see that PyTorch isn't magic; it's just efficient engineering built on the same calculus principles we've mastered over the last 20 videos.

Resources:
🔗 GitHub Repository (Code & Notes):
🔗 Follow me on Instagram: / gautamgoel978

Subscribe and hit the bell icon! 🔔

Now that we've verified the engine, we are ready to build higher-level abstractions. In the next lecture, we will implement the Neuron, Layer, and MLP classes to create our own neural network library. Let's keep building! 📉🚀

#ArtificialIntelligence #DeepLearning #MachineLearning #PyTorch #Python #NeuralNetworks #Backpropagation #Micrograd #Coding #DataScience #MetaAI #SoumithChintala #Autograd #Programmers #100DaysOfCode #MathForML #HindiTutorial #Technology #GenerativeAI #SoftwareEngineering #Tensors #Hindi #AI