NoProp: Training Neural Networks Without Back-Propagation or Forward-Propagation
Link to Research Paper: https://arxiv.org/pdf/2503.24322
Link to Colab Notebook: https://colab.research.google.com/dri...

This video discusses a research paper from the University of Oxford and MA titled "NoProp: Training Neural Networks Without Back-Propagation or Forward-Propagation" [00:08]. It introduces a new method for training neural networks using diffusion layers, where each layer undergoes its own diffusion process instead of participating in traditional end-to-end forward and backward propagation [00:27].

Key points from the video:

- NoProp method: the technique treats each network layer as a diffusion layer that performs its own denoising process [00:27].
- AI history context: the video touches on the history of AI, including the impact of Minsky and Papert's 1969 critique of perceptrons [02:35] and the 1986 Rumelhart paper on backpropagation, which sparked a debate about biological plausibility [03:55].
- Biological plausibility debate: the video highlights the argument that backpropagation is not biologically plausible, since the brain does not appear to use backward passes or gradient descent, which motivates the search for alternatives [05:04].
- NoProp details: the method uses diffusion equations and Gaussian noise, with diffusion time steps corresponding to network layers [07:29].
- Results: tests on Fashion MNIST, CIFAR-10, and CIFAR-100 showed high accuracy on MNIST but lower accuracy on the more complex CIFAR datasets [08:44].
- Experimentation: the video's creator tested the method in a Google Colab notebook [10:29] and also combined it with their own Zyra architecture [15:28].
- Conclusion: the video reflects on the ongoing debate about gradient descent and the potential of biologically inspired AI methods [17:13].
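To make the core idea concrete, here is a minimal NumPy sketch of a NoProp-style training loop, not the authors' code: each layer corresponds to one diffusion time step and is trained with its own local denoising loss on a noisy label embedding, so no gradients flow between layers. The linear denoising blocks, fixed random label embeddings, and the simple noise schedule are all simplifying assumptions for illustration (the paper learns embeddings jointly and uses deeper blocks).

```python
# Hedged sketch of the NoProp idea (discrete-time variant). Assumptions:
# fixed random label embeddings, linear per-layer denoisers, toy noise schedule.
import numpy as np

rng = np.random.default_rng(0)

T = 4                     # number of layers == diffusion time steps (illustrative)
D_IN, D_EMB, N_CLS = 20, 8, 3
LR = 0.1

alphas = np.linspace(0.9, 0.1, T)              # toy noise schedule (assumption)
embeddings = rng.normal(size=(N_CLS, D_EMB))   # fixed label embeddings (assumption)

# One independent linear denoiser per time step: pred = [x, z_t] @ W + b
Ws = [rng.normal(scale=0.1, size=(D_IN + D_EMB, D_EMB)) for _ in range(T)]
bs = [np.zeros(D_EMB) for _ in range(T)]

def train_step(x, y):
    """One local denoising update per layer; no gradients cross layer boundaries."""
    losses = []
    u = embeddings[y]                          # clean label embeddings, shape (B, D_EMB)
    for t in range(T):
        noise = rng.normal(size=u.shape)
        # Noisy label embedding at step t: mix of signal and Gaussian noise
        z_t = np.sqrt(alphas[t]) * u + np.sqrt(1 - alphas[t]) * noise
        inp = np.concatenate([x, z_t], axis=1)
        pred = inp @ Ws[t] + bs[t]
        err = pred - u
        losses.append(float((err ** 2).mean()))
        # Manual gradient of the local MSE loss -- this is the only "backward" step,
        # and it stays inside layer t
        g = 2.0 * err / err.size
        Ws[t] -= LR * inp.T @ g
        bs[t] -= LR * g.sum(axis=0)
    return losses

def predict(x):
    """Inference: start from pure noise and denoise layer by layer."""
    z = rng.normal(size=(x.shape[0], D_EMB))
    for t in range(T):
        z = np.concatenate([x, z], axis=1) @ Ws[t] + bs[t]
    # Classify by nearest label embedding
    d = ((z[:, None, :] - embeddings[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

# Toy data: inject a class-dependent signal into the inputs so the task is learnable
x = rng.normal(size=(64, D_IN))
y = rng.integers(0, N_CLS, size=64)
x[:, :N_CLS] += 3.0 * np.eye(N_CLS)[y]

first_losses = train_step(x, y)
for _ in range(199):
    last_losses = train_step(x, y)
preds = predict(x)
```

The key design point this sketch illustrates is locality: each layer's update uses only that layer's parameters and its own noisy target, so the layers could in principle be trained in parallel or on separate devices.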