Real-time Mandelbrot deep zoom renderer running entirely on CPUs, with no GPU acceleration of the rendering itself. The scene is rendered at 1080p window resolution on a dual-socket Intel Xeon Platinum 8480 system (112 cores / 224 threads) using AVX-512 vectorization and ISPC. Zoom depth is approximately 10¹⁴⁷× magnification, achieved through a hybrid approach combining:

• Perturbation theory for fast per-pixel iteration in hardware
• High-precision reference orbit computation
• A fully FP64 (double-precision) rendering pipeline
• Massively parallel tile rendering across hundreds of hardware threads

Pipeline overview:

1) CPU render target (system-memory framebuffer)
Each frame is rendered into a linear 32-bit pixel buffer on the CPU. Work is decomposed into small fixed-size tiles to keep the workload balanced.

2) Persistent worker-thread pool
A fixed set of worker threads uses a tile-claiming scheme (each worker grabs the next unclaimed tile) to keep synchronization overhead as low as possible. A minimal sketch of this scheme follows after the overview.

3) SIMD kernel in ISPC (AVX-512 / SIMD8 / double precision)
The hot loop is implemented in ISPC targeting AVX-512. Each invocation of the kernel renders a single tile, and all iteration is done in FP64.

4) Deep-zoom math: perturbation + high-precision reference orbit
To make extreme magnification feasible, the renderer combines a high-precision reference orbit computed at the view center with a fast per-pixel perturbation path that uses the reference data. This avoids the cost of arbitrary-precision math in the inner loop while remaining stable at very deep zoom. The reference orbit is computed at startup to avoid any run-time overhead. (The recurrence is sketched below.)

5) GPU presentation (D3D11 + DirectComposition)
The main window is created without a redirection bitmap, allowing the application to present directly into a GPU swap chain rather than through the legacy DWM/GDI surface path. The CPU framebuffer is copied into a staging texture, then into a GPU-resident texture, and rendered to the window as a full-screen quad. The swap chain is bound to a DirectComposition visual, effectively fusing the D3D render target into the compositor's visual tree. (The key calls are sketched below.)

And yes, I certainly did cheat a bit by using a dual-socket, 112-core system to get near-real-time performance out of 65K iterations, but seeing how far it could scale with a monster CPU was what made it so interesting in the first place.
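A minimal sketch of the tile-claiming scheme from step 2, assuming a shared atomic counter as the claim mechanism. The function and parameter names below are illustrative, not taken from the project, and a real persistent pool would keep the workers alive across frames instead of spawning them per frame as this sketch does:

#include <atomic>
#include <functional>
#include <thread>
#include <vector>

// Renders one frame by letting workers claim tiles off a shared atomic counter.
// 'renderTile' stands in for the per-tile ISPC kernel invocation (hypothetical callback).
void render_frame(int width, int height, int tileSize, int numThreads,
                  const std::function<void(int tileX, int tileY)>& renderTile)
{
    const int tilesX = (width  + tileSize - 1) / tileSize;
    const int tilesY = (height + tileSize - 1) / tileSize;
    const int tileCount = tilesX * tilesY;

    std::atomic<int> nextTile{0};           // one shared counter = minimal synchronization
    std::vector<std::thread> workers;
    workers.reserve(numThreads);

    for (int t = 0; t < numThreads; ++t) {
        workers.emplace_back([&] {
            for (;;) {
                // Each worker claims the next unprocessed tile with a single atomic add.
                const int tile = nextTile.fetch_add(1, std::memory_order_relaxed);
                if (tile >= tileCount)
                    break;
                renderTile(tile % tilesX, tile / tilesX);
            }
        });
    }
    for (auto& w : workers)
        w.join();
}

One fetch_add per claimed tile keeps the synchronization cost to a single shared counter, and small fixed-size tiles keep the per-worker load even when some regions iterate far longer than others.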
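For reference, the standard perturbation recurrence behind step 4: with a reference orbit Z_n iterated at high precision from the view-center parameter C, and a per-pixel offset dc = c - C representable in double precision, the delta orbit satisfies dz_{n+1} = 2·Z_n·dz_n + dz_n² + dc, and the pixel's orbit is recovered as z_n = Z_n + dz_n. The scalar FP64 sketch below shows that math only; the actual kernel is the vectorized ISPC/AVX-512 version described above, and the names, escape test, and omitted glitch/rebasing handling are my own simplifications:

#include <complex>
#include <vector>

// Reference orbit Z_n, precomputed once at the view center with high-precision
// arithmetic and then rounded to double for the fast per-pixel path.
using Orbit = std::vector<std::complex<double>>;

// dc = offset of this pixel's parameter c from the reference point, in FP64.
int iterate_pixel(const Orbit& Z, std::complex<double> dc, int maxIter)
{
    std::complex<double> dz = 0.0;          // delta orbit starts at zero
    for (int n = 0; n < maxIter && n + 1 < (int)Z.size(); ++n) {
        // Perturbation recurrence: dz_{n+1} = 2*Z_n*dz_n + dz_n^2 + dc
        dz = 2.0 * Z[n] * dz + dz * dz + dc;

        // Full orbit of this pixel is z_{n+1} = Z_{n+1} + dz_{n+1}; test escape on it.
        std::complex<double> z = Z[n + 1] + dz;
        if (std::norm(z) > 4.0)
            return n;                       // escaped: iteration count feeds the coloring
    }
    return maxIter;                         // treated as inside the set
}

Deep-zoom renderers built on this formulation typically also detect glitched pixels (where dz grows comparable to Z_n) and rebase or re-reference them; that handling is left out of the sketch.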
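Step 5 follows the well-known WS_EX_NOREDIRECTIONBITMAP plus composition-swap-chain pattern. Below is an abbreviated sketch of the key calls under that assumption; window-class registration, error handling, the staging-texture upload, and the full-screen-quad draw are omitted, and none of the names come from the project itself:

#include <windows.h>
#include <d3d11.h>
#include <dxgi1_2.h>
#include <dcomp.h>
#pragma comment(lib, "d3d11.lib")
#pragma comment(lib, "dxgi.lib")
#pragma comment(lib, "dcomp.lib")

// Creates the output window without a redirection bitmap and binds a
// composition swap chain to it via a DirectComposition visual.
IDXGISwapChain1* bind_composition_swapchain(HINSTANCE inst, ID3D11Device* device,
                                            IDXGIFactory2* factory, int w, int h)
{
    // WS_EX_NOREDIRECTIONBITMAP: DWM allocates no GDI redirection surface,
    // so the window is presented only through the swap chain below.
    HWND hwnd = CreateWindowExW(WS_EX_NOREDIRECTIONBITMAP, L"MandelbrotWndClass",
                                L"Mandelbrot", WS_OVERLAPPEDWINDOW | WS_VISIBLE,
                                CW_USEDEFAULT, CW_USEDEFAULT, w, h,
                                nullptr, nullptr, inst, nullptr);

    DXGI_SWAP_CHAIN_DESC1 desc = {};
    desc.Width = w;
    desc.Height = h;
    desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.BufferCount = 2;
    desc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD;
    desc.AlphaMode = DXGI_ALPHA_MODE_PREMULTIPLIED;   // composition swap chains need an explicit alpha mode

    IDXGISwapChain1* swapChain = nullptr;
    factory->CreateSwapChainForComposition(device, &desc, nullptr, &swapChain);

    // Root a DirectComposition visual on the window and hand it the swap chain,
    // fusing the D3D render target into the compositor's visual tree.
    IDXGIDevice* dxgiDevice = nullptr;
    device->QueryInterface(&dxgiDevice);

    IDCompositionDevice* dcompDevice = nullptr;
    DCompositionCreateDevice(dxgiDevice, IID_PPV_ARGS(&dcompDevice));

    IDCompositionTarget* target = nullptr;
    IDCompositionVisual* visual = nullptr;
    dcompDevice->CreateTargetForHwnd(hwnd, TRUE, &target);
    dcompDevice->CreateVisual(&visual);
    visual->SetContent(swapChain);
    target->SetRoot(visual);
    dcompDevice->Commit();

    return swapChain;   // per frame: upload CPU pixels, draw the quad, then Present(1, 0)
}

The per-frame upload described in step 5 would then go through a D3D11_USAGE_STAGING texture (Map, memcpy from the CPU framebuffer, Unmap) followed by a CopyResource into the default-usage texture that the full-screen quad samples.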