Face Recognition using CNN (GoogleNet)
Please follow me on Facebook: / nzamanfaruqui
Connect with me on LinkedIn: / nuruzzaman-faruqui
Subscribe to my new channel: / @nuruzzamanfaruqui
Contact me on Fiverr: https://www.fiverr.com/zaman_faruqui

Pre-trained image classification networks are well trained and can classify images into many categories with very high accuracy. However, a pre-trained network has a limitation: it can only classify the objects it was trained to classify. Suppose GoogleNet is trained to classify 1000 different objects. If an object falls outside these 1000 classes, the network will fail to classify it. There is, however, a way to reuse the existing network and retrain it on your own dataset. This is called transfer learning. Transfer learning is a deep learning technique that uses an existing network as the starting point for learning a new task. In this lesson, we will learn how to apply transfer learning to train GoogleNet to recognize human faces.

Designing a new network, optimizing the architecture for maximum accuracy, and choosing effective initial weights for the hidden nodes is a lengthy, time-consuming process. With transfer learning, we get an already optimized network that is ready to learn new features for new tasks. This is how transfer learning lets us get things done with a neural network with minimal effort. It means we can perform face recognition using a convolutional neural network (CNN) with minimal effort by modifying GoogleNet.

A convolutional neural network has multiple layers. We can divide these layers into two categories: feature learning layers and task-specific layers. The feature learning layers learn low-level features such as colors, blobs, and edges. The task-specific layers, on the other hand, learn features specific to the task. For example, if the network's task is to distinguish vehicle images from non-vehicle images, the task-specific layers will learn the features of vehicles.
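As an illustration, the split between feature learning layers and task-specific layers can be inspected directly. The sketch below assumes the lesson uses MATLAB's Deep Learning Toolbox, whose `googlenet` function loads the pre-trained network; in that model the last three layers are the task-specific ones.

```matlab
% Load the pre-trained GoogleNet (requires the Deep Learning Toolbox
% Model for GoogLeNet Network support package).
net = googlenet;

% The early layers learn generic low-level features (colors, blobs, edges).
% The final three layers are task-specific: a 1000-way fully connected
% layer ('loss3-classifier'), a softmax layer ('prob'), and the
% classification output layer ('output').
net.Layers(end-2:end)
```

These final three layers are the ones replaced during transfer learning, while the feature learning layers are kept as-is.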
Normally these task-specific layers are the last layers of a network. In transfer learning, the task-specific layers are removed from the existing network and new layers are added that can be trained to learn the features of a new task. These new layers are then trained on the new dataset, validated, and tested. If the network is trained properly on an effective dataset, it becomes capable of classifying the newly learned objects. Figure 1 illustrates the overview of the transfer learning process. The entire process is divided into seven steps:
1. Preparing the dataset
2. Loading the dataset
3. Loading the pre-trained network
4. Replacing the final layers (task-specific layers)
5. Image augmentation to prevent overfitting
6. Training the network
7. Testing the network
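The seven steps above can be sketched end to end. This is a minimal outline assuming MATLAB's Deep Learning Toolbox; the dataset folder name `faceDataset`, the 80/20 split, and the training hyperparameters are placeholder assumptions for illustration, not values taken from the lesson.

```matlab
% 1-2. Prepare and load the dataset: one subfolder per person,
% with the folder name used as the class label.
imds = imageDatastore('faceDataset', ...
    'IncludeSubfolders', true, 'LabelSource', 'foldernames');
[imdsTrain, imdsTest] = splitEachLabel(imds, 0.8, 'randomized');

% 3. Load the pre-trained network.
net = googlenet;
inputSize = net.Layers(1).InputSize;        % [224 224 3]
numClasses = numel(categories(imdsTrain.Labels));

% 4. Replace the final (task-specific) layers with new ones
% sized for our face classes.
lgraph = layerGraph(net);
lgraph = replaceLayer(lgraph, 'loss3-classifier', ...
    fullyConnectedLayer(numClasses, 'Name', 'fc_faces'));
lgraph = replaceLayer(lgraph, 'output', ...
    classificationLayer('Name', 'output_faces'));

% 5. Image augmentation to prevent overfitting: random flips
% and translations create varied views of each face.
augmenter = imageDataAugmenter('RandXReflection', true, ...
    'RandXTranslation', [-30 30], 'RandYTranslation', [-30 30]);
augTrain = augmentedImageDatastore(inputSize(1:2), imdsTrain, ...
    'DataAugmentation', augmenter);
augTest = augmentedImageDatastore(inputSize(1:2), imdsTest);

% 6. Train the network (hyperparameters are illustrative).
options = trainingOptions('sgdm', ...
    'MiniBatchSize', 10, 'MaxEpochs', 6, ...
    'InitialLearnRate', 1e-4, ...
    'Plots', 'training-progress');
faceNet = trainNetwork(augTrain, lgraph, options);

% 7. Test the network on the held-out images.
predicted = classify(faceNet, augTest);
accuracy = mean(predicted == imdsTest.Labels)
```

A small initial learning rate is typical here: the feature learning layers are already well trained, so only gentle updates are needed, while the new final layers learn the face-specific features from scratch.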