In Episode 7 of our croit series, we walk through how to create and tune storage pools for optimal durability, performance, and capacity efficiency.

- Understand what pools are and how they structure your Ceph storage
- Create a pool in seconds using the intuitive croit GUI, no Ceph expertise required
- Choose between replication and erasure coding for data durability and space savings
- Get recommended erasure coding profiles like 4+2, 4+3, or 8+2 for performance consistency
- Visualize failure domains and overhead based on your cluster topology
- Set the optimal number of placement groups (PGs) for performance and balance
- Assign CRUSH rules to target specific device classes like SSD or NVMe
- Define the application purpose of the pool, such as RBD or object storage
- Perform day 2 operations like edits, scrubs, compression, and more
- See how croit auto-creates best-practice pools for metadata during gateway setup

For large-scale deployments with HDDs, we highly recommend using NVMe-backed metadata pools for services like CephFS and RGW. This is critical when scaling beyond 50,000 objects or files.
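The capacity trade-off between replication and the erasure coding profiles mentioned above follows directly from the profile parameters: a k+m profile stores (k+m)/k raw bytes per user byte and survives m simultaneous failures. A minimal sketch of that arithmetic (the profile list matches the episode; the function names are our own):

```python
# Storage efficiency: replication vs. erasure coding.
# For a k+m EC profile, raw overhead is (k+m)/k and the usable
# fraction of raw capacity is k/(k+m); m shards may be lost.

def ec_overhead(k: int, m: int) -> float:
    """Raw bytes stored per byte of user data for a k+m profile."""
    return (k + m) / k

def usable_fraction(k: int, m: int) -> float:
    """Fraction of raw capacity available for user data."""
    return k / (k + m)

# 3x replication stores 3 raw bytes per user byte (33% usable).
print(f"3x replication: overhead 3.00x, usable {1/3:.0%}")
for k, m in [(4, 2), (4, 3), (8, 2)]:
    print(f"EC {k}+{m}: overhead {ec_overhead(k, m):.2f}x, "
          f"usable {usable_fraction(k, m):.0%}, tolerates {m} failures")
```

For example, 4+2 uses 1.5x raw capacity (vs. 3x for replication) while still tolerating two failures; 8+2 pushes usable capacity to 80% at the cost of wider stripes per object.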
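On the PG count: the traditional rule of thumb targets roughly 100 PGs per OSD, divided by the pool's size (replica count, or k+m for EC), rounded to a power of two. Modern Ceph can manage this via the pg_autoscaler, and croit applies similar best practices when it creates pools; the sketch below (function name ours) just illustrates the manual calculation:

```python
import math

def suggest_pg_num(num_osds: int, pool_size: int,
                   target_pgs_per_osd: int = 100) -> int:
    """Rule-of-thumb pg_num for a pool spanning the whole cluster.

    pool_size is the replica count for replicated pools, or k+m
    for erasure-coded pools. Result is rounded to a power of two,
    as PG counts conventionally are.
    """
    raw = num_osds * target_pgs_per_osd / pool_size
    power = max(1, round(math.log2(raw)))
    return 2 ** power

# 12 OSDs, 3x replication: 12*100/3 = 400 -> nearest power of two is 512.
print(suggest_pg_num(12, 3))   # 512
# 12 OSDs, EC 4+2 (size 6): 200 -> 256.
print(suggest_pg_num(12, 6))   # 256
```

In practice this is a starting point; clusters hosting many pools should split the per-OSD budget across them rather than giving each pool the full count.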