👨‍💻 To get started with AI engineering, check out this Scrimba course: https://scrimba.com/the-ai-engineer-p...

AlexNet is the architecture often cited as the catalyst that started the AI boom in 2012. Here we discuss what was novel at the time and go through the paper to understand it!

Credit
Music from YouTube Creator Music (A Brand New Start by TrackTribe), and the image is from the Deep Learning with PyTorch book! I greatly recommend the book!

The code for AlexNet can be found here: [https://code.google.com/archive/p/cud...](https://code.google.com/archive/p/cud...)

Table of Contents
Introduction: 0:00
Background: 0:15
Dataset: 1:17
AlexNet Architecture: 2:22
Training Specification: 7:45
Result: 10:28
Conclusion: 14:42

At the time the paper was written, deep learning architectures weren't really used for object recognition tasks. Through a series of improvements, the authors Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton were able not only to train such a model properly, but also to demonstrate its superiority in the ILSVRC object recognition competition.

Abstract
We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called “dropout” that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.

A minimal PyTorch sketch of this architecture is included at the end of this description.

Reference
Paper: [https://papers.nips.cc/paper/4824-ima...](https://papers.nips.cc/paper/4824-ima...)

----
Join the Discord for general discussion: / discord
----

Follow Me Online Here:
Twitter: / codethiscodeth1
GitHub: https://github.com/yacineMahdid
LinkedIn: / yacine-mahdid-809425163
Instagram: / yacine_mahdid
___

Have a great week! 👋
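Bonus: a minimal, single-GPU PyTorch sketch of the architecture summarized in the abstract above — five convolutional layers (some followed by max-pooling), ReLU ("non-saturating") activations, dropout in the fully-connected layers, and a final 1000-way classifier. The channel sizes follow the common merged variant (as popularized by torchvision) rather than the paper's original two-GPU split, so treat this as an illustrative sketch, not the authors' exact code.

```python
import torch
import torch.nn as nn

class AlexNet(nn.Module):
    """Merged single-GPU sketch of the AlexNet architecture."""

    def __init__(self, num_classes: int = 1000):
        super().__init__()
        # Five convolutional layers, some followed by max-pooling.
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),  # conv1
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2),            # conv2
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(192, 384, kernel_size=3, padding=1),           # conv3
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),           # conv4
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1),           # conv5
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        # Three fully-connected layers; dropout reduces overfitting here,
        # as described in the abstract.
        self.classifier = nn.Sequential(
            nn.Dropout(p=0.5),
            nn.Linear(256 * 6 * 6, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(p=0.5),
            nn.Linear(4096, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),  # 1000-way output
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = torch.flatten(x, 1)
        # Returns raw logits; the softmax is folded into
        # nn.CrossEntropyLoss at training time.
        return self.classifier(x)

# Quick shape check on one 224x224 RGB image.
model = AlexNet()
logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 1000])
```

The paper also applied local response normalization after the first two convolutional layers; it is omitted here because modern re-implementations usually drop it.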