1000x Speed & 24-Bit Precision: Analogue RRAM Destroys the Digital Computing Bottleneck!
(Video Opening Scene: Energetic, fast-paced music. Graphics displaying complex matrix equations and RRAM chip microstructures flash across the screen.)

*(Introduction Script)*

*Host:* Hey everyone, and welcome back to the channel! Today, we're diving deep into a monumental challenge that has long plagued high-performance computing: **how to solve massive matrix equations both precisely and efficiently**. Whether you're training cutting-edge neural networks, running scientific simulations, or detecting signals in 6G wireless systems, solving equations like $Ax=b$ is absolutely central. But traditional digital processors are hitting a wall. Matrix inversion is computationally expensive, with complexity scaling as $O(N^3)$, and it's bottlenecked by the separation of processor and memory in the conventional von Neumann architecture.

*(The Analogue Solution)*

Imagine if your memory could calculate at the speed of light! That's the promise of *Analogue Matrix Computing (AMC)* using Resistive Random-Access Memory, or RRAM. An RRAM array acts as a physical matrix: the conductance of each device becomes one element of the matrix, so a matrix-vector multiplication (MVM) happens in essentially one step.

*(The Breakthrough: Precision and Speed)*

For years, the central bottleneck of analogue computing was **precision**. But researchers have now described a **precise and scalable analogue matrix inversion solver** that overcomes this limit. Their breakthrough approach uses an iterative algorithm combining analogue low-precision matrix inversion (LP-INV) with analogue high-precision matrix-vector multiplication (HP-MVM). The whole scheme is implemented on **3-bit RRAM chips fabricated in a foundry**. By scaling this technique with the BlockAMC algorithm, they successfully solved inversion problems involving 16x16 real-valued matrices with an astonishing **24-bit fixed-point precision**, which is comparable to FP32 digital processors.
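(Optional on-screen code graphic.) Here's a rough numerical sketch of that iterative-refinement idea. To be clear, this is not the paper's actual implementation: the 3-bit quantizer and the diagonally dominant test matrix below are software stand-ins for the analogue RRAM hardware. The structure is the point — a coarse low-precision inverse (the LP-INV step) gets polished by repeatedly computing the residual at high precision (the HP-MVM step):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
# Diagonally dominant test matrix standing in for a 16x16 problem.
A = 0.5 * rng.standard_normal((N, N)) + N * np.eye(N)
b = rng.standard_normal(N)

def quantize(M, bits=3):
    """Coarsely quantize M, mimicking a 3-bit analogue conductance map."""
    step = np.abs(M).max() / (2 ** (bits - 1) - 1)
    return np.round(M / step) * step

# LP-INV stand-in: an inverse computed from the quantized copy of A.
A_inv_lp = np.linalg.inv(quantize(A))

# Iterative refinement: the residual r = b - A @ x is formed at high
# precision (the HP-MVM step), then corrected via the coarse inverse.
x = A_inv_lp @ b
for _ in range(20):
    r = b - A @ x
    if np.linalg.norm(r) < 1e-10 * np.linalg.norm(b):
        break
    x = x + A_inv_lp @ r
```

Each pass shrinks the error by a constant factor, so even a very crude inverse converges to a high-precision solution in a handful of iterations — which is exactly why a 3-bit analogue array can end up matching FP32 results.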
*(The Performance Edge)*

Now for the truly mind-blowing part: this analogue approach isn't just precise; it's blisteringly fast and efficient! Applied to signal detection in complex, high-data-rate systems like **massive MIMO** wireless communications, the HP-INV solver matched the performance of FP32 digital processors in just three iterations. And the benchmarking projections are revolutionary: this analogue RRAM solver could potentially offer *1,000 times higher throughput* and *100 times better energy efficiency* than state-of-the-art digital processors while maintaining the same level of precision.

*(Conclusion)*

This is the end of the precision bottleneck for analogue computing and a huge leap forward for data-intensive applications like 6G. Stick around as we dive into the iterative refinement process, how bit-slicing guarantees high precision, and what this all means for the future of processing large-scale matrices! Don't forget to hit that subscribe button!
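(Optional closing on-screen graphic.) Bit-slicing, mentioned above, deserves a tiny sketch of its own. The widths here are illustrative assumptions, not the chip's actual configuration: an 8-bit integer matrix is cut into 2-bit slices, each slice performs its own low-precision MVM (as a coarse RRAM array could), and the partial products are recombined exactly with binary shifts:

```python
import numpy as np

BITS, SLICE = 8, 2  # illustrative widths, not the paper's exact values
rng = np.random.default_rng(1)
A = rng.integers(0, 2 ** BITS, size=(4, 4))
x = rng.integers(0, 2 ** BITS, size=4)

# One low-precision MVM per 2-bit slice of A, then shift-and-add.
shifts = range(0, BITS, SLICE)
partials = [(((A >> s) & (2 ** SLICE - 1)) @ x) << s for s in shifts]
y = sum(partials)

assert np.array_equal(y, A @ x)  # bit-sliced recombination is exact
```

Because the recombination is pure integer shift-and-add, no precision is lost — the high-precision result is assembled from strictly low-precision physical operations.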