[Explainable AI] Statistical Mechanics of Explainable Artificial Intelligence. Hebbian Neural Nets.
We’ve all heard the same old trope: AI is a "black box." We feed it data, it spits out an answer, and we just... cross our fingers and hope it's right. But what if we could look under the hood and see the gears turning, not as code, but as physical phases of matter?

Today, we’re diving into a fascinating paper titled "Statistical Mechanics of Explainable Artificial Intelligence." It’s a deep dive into Dense Hebbian neural networks. If the standard neural network is a quiet one-on-one conversation between two neurons, these networks are more like a high-energy group chat where everyone is talking to everyone else at the same time.

The "Ultra" Factor. The researchers found that by letting these neurons interact in groups larger than just pairs, they unlocked what they call ultra-storage and ultra-tolerance. We’re talking about memory capacities that make your standard Hopfield network look like a sticky note in a hurricane. But, as with all things in physics, there's no free lunch. To get this massive computational power, you need a mountain of training data. It’s a classic trade-off: complexity versus information.

References:
Dense Hebbian neural networks: a replica symmetric picture of supervised learning
https://arxiv.org/pdf/2212.00606
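The "group chat" interactions described above can be made concrete with a toy dense associative memory in the Krotov–Hopfield style, where neurons couple in groups of P rather than pairs. This is a minimal sketch, not the paper's exact model: the sizes N, K, the order P, and the update rule below are illustrative assumptions. Setting P=2 recovers the classic pairwise Hopfield network; raising P sharply boosts storage capacity, which is the "ultra-storage" effect in spirit.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100   # number of +/-1 neurons (illustrative size)
K = 30    # number of stored patterns
P = 4     # interaction order: P=2 is the classic pairwise Hopfield case

# Random binary patterns to memorize (the "training data")
patterns = rng.choice([-1, 1], size=(K, N))

def sweep(state, p):
    """One asynchronous update sweep of a dense (p-body) Hebbian network.

    Each neuron aligns with its local field, which descends the energy
    E = -sum_mu (xi_mu . state)^p, a standard dense-associative-memory form.
    """
    for i in rng.permutation(N):
        overlaps = patterns @ state                 # K pattern overlaps
        h = patterns[:, i] @ (overlaps ** (p - 1))  # p-body local field on neuron i
        state[i] = 1 if h >= 0 else -1
    return state

# Retrieval test: corrupt 20% of a stored pattern, then let the network relax
noisy = patterns[0].copy()
flip = rng.choice(N, size=N // 5, replace=False)
noisy[flip] *= -1

for _ in range(5):
    noisy = sweep(noisy, P)

print("overlap with stored pattern:", (noisy @ patterns[0]) / N)
```

With P=4 the cross-talk from the other K-1 patterns is suppressed relative to the signal, so the corrupted state typically relaxes back to the stored pattern (overlap near 1.0); at the same K, the pairwise P=2 network is much closer to its capacity limit.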