Mastering Snowflake Dynamic Tables: Automated Pipelines & Performance Optimization

Are you tired of managing brittle data pipelines built from complex streams, tasks, manual merges, and cron schedules? In this video, we explore the shift from the old "imperative" style of data engineering to the new "declarative" approach using Snowflake Dynamic Tables. Learn how to "set it and forget it" by simply defining your desired result and letting the platform manage the orchestration.

We break down how the automated refresh process uses incremental processing to detect changes efficiently and process only the deltas. You will also learn how to balance compute costs against data freshness by adjusting the TARGET_LAG knob.

To show how this works in practice, we dive into a performance lab focused on Slowly Changing Dimensions (SCD Type 1). We compare a suboptimal INNER JOIN approach against an optimized pattern using the QUALIFY clause with the RANK() window function. You will see exactly how the optimized query leverages partition pruning to isolate changes, dropping refresh times from 2.8 seconds to just 804 milliseconds!

⏱️ Chapters / Timestamps:
• 0:00 - The Shift to Declarative Pipelines: Say goodbye to high-maintenance infrastructure.
• 0:45 - Architecture & Incremental Processing: How dynamic tables process only what has changed.
• 1:30 - Controlling Freshness: Understanding the TARGET_LAG parameter.
• 2:15 - Ideal Use Cases: SCDs, pipeline chaining, and transitioning from batch to streaming.
• 3:00 - Performance Lab Setup: Simulating a live price table with 100 million rows.
• 3:50 - Suboptimal vs. Optimized SQL: Why you should use QUALIFY instead of an INNER JOIN.
• 4:40 - The Performance Showdown: Seeing partition pruning and data locality in action.
• 5:30 - The Broader Landscape: How this compares to Databricks Delta Live Tables and BigQuery Materialized Views.

💡 Key Takeaways:
• The Golden Rule: Use QUALIFY for SCDs and avoid self-joins to enable efficient partition pruning.
• The Metric to Watch: Always monitor "Partitions Scanned" in your refresh history to verify that data locality is working; it is the canary in the coal mine for optimization.
• The Future is Declarative: The data industry is collectively moving away from manual scripts toward automated SQL pipelines.
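The declarative "set it and forget it" pattern described above boils down to one statement: you declare the query and a freshness target, and Snowflake owns the orchestration. A minimal sketch (the table, source, and warehouse names `prices_latest`, `raw_prices`, and `analytics_wh` are hypothetical):

```sql
-- Declarative pipeline: no streams, tasks, or cron schedules.
-- TARGET_LAG states how stale the result is allowed to become;
-- Snowflake schedules incremental refreshes to meet that target.
-- Names below (prices_latest, raw_prices, analytics_wh) are examples.
CREATE OR REPLACE DYNAMIC TABLE prices_latest
  TARGET_LAG = '5 minutes'          -- the cost-vs-freshness knob
  WAREHOUSE  = analytics_wh         -- compute used for refreshes
AS
SELECT product_id, price, updated_at
FROM raw_prices;
```

Loosening TARGET_LAG (say, to '1 hour') trades freshness for fewer refreshes and lower compute cost; tightening it moves the pipeline toward streaming behavior.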
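The SCD Type 1 comparison from the performance lab can be sketched as the two query shapes below, assuming a hypothetical `raw_prices` table keyed by `product_id` with an `updated_at` timestamp (the 2.8 s vs. 804 ms timings in the video come from its 100-million-row setup, not from this sketch):

```sql
-- Suboptimal: self-join against a MAX() aggregate. The table is
-- scanned twice, and the join makes it hard for the incremental
-- refresh to prune partitions down to just the changed keys.
SELECT p.product_id, p.price, p.updated_at
FROM raw_prices p
INNER JOIN (
  SELECT product_id, MAX(updated_at) AS max_ts
  FROM raw_prices
  GROUP BY product_id
) latest
  ON  p.product_id = latest.product_id
  AND p.updated_at = latest.max_ts;

-- Optimized: one pass with a window function. QUALIFY filters on
-- the rank, keeping only the newest row per key, and the partition
-- key in the OVER clause preserves data locality for pruning.
SELECT product_id, price, updated_at
FROM raw_prices
QUALIFY RANK() OVER (
  PARTITION BY product_id
  ORDER BY updated_at DESC
) = 1;
```

This is the "Golden Rule" takeaway in query form: the QUALIFY version avoids the self-join entirely, which is what lets the refresh isolate and reprocess only the changed partitions.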