How Snowflake Dynamic Tables Eliminate Pipeline Complexity

Opeyemi
Feb 9, 2026
8 min read

Welcome to the first post in our Snowflake Concepts series, where we break down powerful Snowflake features that solve real data engineering challenges.

If you've ever built data pipelines in Snowflake, you've likely dealt with this familiar workflow:
  • Write dbt models or SQL transformations
  • Set up orchestration (Airflow, Prefect, dbt Cloud)
  • Manage dependencies between tables
  • Schedule refreshes and handle failures
  • Rinse and repeat

And if you don't have an external orchestration tool in place, you're left using Snowflake's native features: Tasks and Streams, which require you to manually define execution order, handle errors, and manage dependencies yourself.

What if there was a simpler way? What if you could declare what your data should look like, and Snowflake automatically kept it up to date?

That's exactly what Dynamic Tables do.

In this post, we'll explore how Dynamic Tables work, when to use them, and how they can dramatically simplify your data pipelines. This is the first in a series where we'll cover Snowflake concepts like SCD Type 2 implementations, Tasks & Streams, and building production-grade pipelines.

What Are Dynamic Tables?

Dynamic Tables are specialised tables that automatically refresh based on a defined query and freshness target. Instead of setting up tasks and schedules to update data, you simply define what the final table should look like. Snowflake handles the rest—managing refresh schedules, detecting changes, and updating your data automatically.

Think of them as materialised views with superpowers: they store query results for faster performance, but can handle complex transformations, joins across multiple tables, and intelligent incremental updates.

The Core Concept

When you create a Dynamic Table, you write a SQL query that defines how data should be transformed. Snowflake then:

  • Manages the refresh schedule automatically based on your freshness requirements
  • Detects changes in source tables and updates only what's new or modified
  • Handles dependencies between tables without explicit orchestration
  • Switches between batch and streaming modes without code changes
  • Provides built-in monitoring so you can see refresh status and performance

No orchestration tools. No complex DAGs. Just declare what you want, and Snowflake keeps it up to date.

Current Challenges vs. Dynamic Tables Benefits

Building data pipelines typically involves juggling many moving parts: extracting data from multiple sources, transforming it, managing dependencies, coordinating execution order, and handling failures when things inevitably break. Dynamic Tables cut through this complexity with four core benefits:

1. Simplified Pipelines

  • Build streaming and batch pipelines with simple SQL—no orchestration tools needed, even for complex transformations with joins and aggregations.

2. Reduced Development Time

  • Focus on defining queries instead of managing pipelines, so you spend less time on orchestration and more time generating insights from your data.

3. Cost Effective

  • Snowflake automatically analyses your queries and recommends the best refresh strategy (full vs. incremental). You only pay for what actually changes, not full table rebuilds.

4. Native to Snowflake

  • Everything runs inside Snowflake with the same security, governance, and scalability you already rely on. No external tools to manage or maintain.

How Dynamic Tables Work

Dynamic Tables operate on four key principles:

1. Define Your Query

Create the Dynamic Table by writing a SQL query that defines the transformation you want

2. Automated Refresh

Snowflake runs this query on a schedule, automatically updating the Dynamic Table with any changes from the source tables

3. Efficient Updates

Instead of rebuilding the entire table, Snowflake detects changes and merges only new or modified data

4. Set Data Freshness (Lag)

You decide how up-to-date the data should be:

  • 5-minute lag for near-real-time updates
  • 1-hour lag for cost savings with less frequent updates

Simple Declarative Interface

Key components:

  • Stores results automatically
  • Automatically refreshes based on your target lag
  • Any query can be used (joins, aggregations, window functions, etc.)
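
As a rough sketch (the table, column, and warehouse names here are illustrative placeholders), a Dynamic Table definition looks like this:

```sql
-- A minimal sketch of the declarative interface (names are placeholders)
CREATE OR REPLACE DYNAMIC TABLE my_summary
  TARGET_LAG = '5 minutes'      -- how fresh the results must be
  WAREHOUSE  = transform_wh     -- compute used for refreshes
AS
  SELECT order_date, SUM(amount) AS total_amount
  FROM raw_orders
  GROUP BY order_date;
```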

Understanding REFRESH_MODE

Dynamic Tables offer three refresh modes:

  • AUTO: Uses an intelligent heuristic to choose between full and incremental refresh based on query complexity
  • FULL: Performs a full refresh of the Dynamic Table, even if it could be refreshed incrementally
  • INCREMENTAL: Creates an incrementally refreshed Dynamic Table; if the query doesn't support incremental refresh, creation fails
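
For illustration, reusing the sketch above (names remain placeholders), you can pin the mode explicitly and then check what Snowflake is actually doing:

```sql
-- Illustrative: force incremental refresh (creation fails if the query can't support it)
CREATE OR REPLACE DYNAMIC TABLE my_summary
  TARGET_LAG   = '5 minutes'
  WAREHOUSE    = transform_wh
  REFRESH_MODE = INCREMENTAL    -- AUTO is the default; FULL and INCREMENTAL are explicit overrides
AS
  SELECT order_date, SUM(amount) AS total_amount
  FROM raw_orders
  GROUP BY order_date;

-- The output includes the refresh mode in effect for each Dynamic Table
SHOW DYNAMIC TABLES;
```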

Dynamic Tables in Action: A Practical Example

Let's see how Dynamic Tables work with a real-world scenario. Imagine you're building user analytics for a SaaS product. You have clickstream data flowing in, and you need to track user engagement, daily activity, and conversion metrics.

The Traditional Approach

Without Dynamic Tables, you'd need to:

  • Write dbt models or SQL scripts for each transformation
  • Set up an orchestration tool (Airflow, Prefect) to schedule these transformations
  • Manually define dependencies (daily activity → engagement metrics → conversion funnel)
  • Handle incremental logic yourself
  • Monitor and debug failures across multiple systems

The Dynamic Tables Approach

With Dynamic Tables, you declare what you want, and Snowflake handles the rest.

Step 1: Set up your base tables

First, let's create some sample data to work with:
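
The exact schema isn't important; a minimal sketch that the rest of this walkthrough can build on might look like this (all column names and sample values are assumptions):

```sql
-- Base tables: user profiles and raw clickstream events (illustrative schema)
CREATE OR REPLACE TABLE users (
    user_id     INT,
    user_name   STRING,
    plan        STRING,
    signup_date DATE
);

CREATE OR REPLACE TABLE user_events (
    event_id        INT,
    user_id         INT,
    event_type      STRING,          -- e.g. 'page_view', 'signup', 'purchase'
    event_timestamp TIMESTAMP_NTZ
);

-- A few sample rows to start with
INSERT INTO users VALUES
    (101, 'Ada',   'free', '2026-01-02'),
    (102, 'Bayo',  'pro',  '2026-01-05'),
    (103, 'Chidi', 'free', '2026-01-07'),
    (104, 'Dana',  'pro',  '2026-01-09');

INSERT INTO user_events VALUES
    (1, 101, 'page_view', '2026-02-01 09:00:00'),
    (2, 101, 'signup',    '2026-02-01 09:05:00'),
    (3, 102, 'page_view', '2026-02-01 10:00:00'),
    (4, 102, 'purchase',  '2026-02-01 10:20:00'),
    (5, 103, 'page_view', '2026-02-02 11:00:00'),
    (6, 104, 'page_view', '2026-02-02 12:00:00'),
    (7, 104, 'purchase',  '2026-02-02 12:30:00');
```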

Step 2: Create your first Dynamic Table

Now let's create a Dynamic Table that summarises daily user activity:
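
One way to sketch it, using the base tables above (the TARGET_LAG = DOWNSTREAM setting is deliberate and is explained in Step 3; the warehouse name is an assumption):

```sql
-- Daily activity per user; refreshed on demand by whatever depends on it
CREATE OR REPLACE DYNAMIC TABLE user_daily_activity
  TARGET_LAG = DOWNSTREAM
  WAREHOUSE  = transform_wh
AS
  SELECT
      user_id,
      event_timestamp::DATE                AS activity_date,
      COUNT(*)                             AS total_events,
      COUNT_IF(event_type = 'page_view')   AS page_views,
      COUNT_IF(event_type = 'purchase')    AS purchases
  FROM user_events
  GROUP BY user_id, event_timestamp::DATE;
```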


Step 3: Create a downstream Dynamic Table

Finally, let's create a conversion funnel that depends on our daily activity table. Notice the TARGET_LAG = DOWNSTREAM parameter on the user_daily_activity dynamic table: it tells Snowflake to refresh that table automatically whenever daily_conversion_funnel needs an update. In other words, user_daily_activity's refresh schedule is inferred from its downstream consumers. Refer to the Snowflake documentation on target lag for more detail.
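
A possible shape for the funnel table (the metric definitions are illustrative):

```sql
-- Daily conversion funnel built on top of user_daily_activity
CREATE OR REPLACE DYNAMIC TABLE daily_conversion_funnel
  TARGET_LAG = '5 minutes'
  WAREHOUSE  = transform_wh
AS
  SELECT
      activity_date,
      COUNT(DISTINCT user_id)                                   AS active_users,
      COUNT(DISTINCT CASE WHEN page_views > 0 THEN user_id END) AS viewers,
      COUNT(DISTINCT CASE WHEN purchases > 0 THEN user_id END)  AS purchasers
  FROM user_daily_activity
  GROUP BY activity_date;
```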


That's it. No orchestration. No scheduling logic. Just define the transformation, and Snowflake keeps it updated within 5 minutes of any changes to user_events.


Step 4: Build on top with dependent Dynamic Tables

Now let's create a more complex Dynamic Table that joins user data with event activity:
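
A sketch that joins the users table with the daily activity rollup of those events (the metric definitions are assumptions):

```sql
-- Per-user engagement metrics joining profile data with the daily activity table
CREATE OR REPLACE DYNAMIC TABLE user_engagement_metrics
  TARGET_LAG = '5 minutes'
  WAREHOUSE  = transform_wh
AS
  SELECT
      u.user_id,
      u.plan,
      COUNT(a.activity_date)   AS active_days,
      SUM(a.total_events)      AS total_events,
      SUM(a.purchases)         AS total_purchases
  FROM users u
  LEFT JOIN user_daily_activity a
    ON a.user_id = u.user_id
  GROUP BY u.user_id, u.plan;
```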


Step 5: Query your Dynamic Tables
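
Dynamic Tables are queried exactly like regular tables:

```sql
-- Query Dynamic Tables like any other table
SELECT * FROM user_daily_activity     ORDER BY activity_date, user_id;
SELECT * FROM user_engagement_metrics ORDER BY user_id;
SELECT * FROM daily_conversion_funnel ORDER BY activity_date;
```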

Step 6: Let’s see automatic updates in action

user_events before new data

user_daily_activity before new data

user_engagement_metrics before

daily_conversion_funnel before

Now here's where Dynamic Tables shine.

Let's add new events:
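
As a sketch (the specific values are assumptions; user_id 105 matches the captions below):

```sql
-- A new user signs up and generates fresh events in the base tables
INSERT INTO users VALUES
    (105, 'Efe', 'pro', '2026-02-03');

INSERT INTO user_events VALUES
    (8, 105, 'page_view', '2026-02-03 08:00:00'),
    (9, 105, 'purchase',  '2026-02-03 08:15:00');
```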

user_events table after new record


Wait for the target lag period to pass (or trigger a manual refresh), then query again. You can see the pipeline's DAG in the Dynamic Table's graph view in Snowsight.
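
If you'd rather not wait out the lag, a manual refresh can be triggered directly (shown here for the funnel table from the earlier steps):

```sql
-- Trigger an immediate refresh instead of waiting for the target lag
ALTER DYNAMIC TABLE daily_conversion_funnel REFRESH;
```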


user_daily_activity after new data (user_id 105 automatically added once the set target lag elapsed)

user_engagement_metrics after

daily_conversion_funnel after

Your conversion funnel automatically reflects the new data—no DAGs to trigger, no manual refreshes, no orchestration debugging.

What We Just Built

In less than 50 lines of SQL, we created:

✅ A three-tier data pipeline with proper dependencies

✅ Automatic incremental updates

✅ Multiple aggregation layers (daily activity → engagement → conversion)

✅ Cross-table joins without manual orchestration

With traditional orchestration, this would require:

  • Airflow DAGs or multiple tasks/streams to schedule and manage execution
  • Custom incremental logic in each model
  • Debugging across multiple systems when things break

Dynamic Tables handle all of this automatically.

When to Use Dynamic Tables

Dynamic Tables are ideal for scenarios where:

  • You want to avoid manually tracking data dependencies and managing refresh schedules
  • You need to materialize query results from multiple base tables without writing custom code
  • You need to build multiple tables for data transformations in a pipeline
  • You don't need fine-grained control over refresh schedules, and you only need to specify how fresh the data should be
  • You want to use a Dynamic Table as the source of a stream (for downstream processing)
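
On that last point, consuming changes from a Dynamic Table looks much like consuming changes from a regular table; a rough sketch using the funnel table from the example above (the stream name is illustrative):

```sql
-- A stream over a Dynamic Table lets downstream consumers pick up only the changes
CREATE OR REPLACE STREAM conversion_funnel_changes
  ON DYNAMIC TABLE daily_conversion_funnel;

-- Downstream processing reads just the delta since it last consumed the stream
SELECT * FROM conversion_funnel_changes;
```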

When NOT to Use Dynamic Tables

Dynamic Tables may not be the best fit when you need:

  • Fine-grained refresh control: Control over refresh timing, such as specific times of day (use Tasks instead)
  • Unsupported constructs: Stored procedures or non-deterministic functions (use Tasks/Streams)
  • Complete pipeline control: Full control over task execution and status tracking (Streams/Tasks would be better)
  • External data sources: External tables and streams aren't supported as Dynamic Table sources
  • Complex transformations: Very complex multi-step logic may be better suited to Tasks/Streams

Dynamic Tables vs. Other Snowflake Objects

DT vs. Materialised Views

Materialised Views:

  • Defined over a single base table (no joins)
  • Always up-to-date (no lag)
  • Accelerates precomputed aggregate queries on a single table

Dynamic Tables:

  • Enable joins and complex queries across multiple tables
  • Up-to-date based on the scheduled lag you define
  • Better for multi-table transformation pipelines

DT vs. Streams/Tasks

Streams and Tasks:

  • Offer more control over execution
  • Better for complex, multi-step pipelines
  • Require more setup and maintenance

Dynamic Tables:

  • Simpler, declarative approach
  • Less control, but much easier to use
  • Better for straightforward transformation pipelines

Understanding Target Lag

Target Lag is how you control data freshness in Dynamic Tables. Refreshes are driven by the TARGET_LAG you set, which defines how far the table's contents are allowed to lag behind its source tables.

The parameter can either be:

  • A fixed time (e.g., TARGET_LAG = '1 hour')
  • Set to depend on downstream models with the DOWNSTREAM parameter

This makes the table's refresh timing depend on the Dynamic Tables that consume it. If an upstream DT A is set to DOWNSTREAM and a downstream DT B depends on it, A refreshes whenever B needs fresh data, so A is never refreshed more often than its consumers require.

Best Practices

Lag Configuration
  • Choose lag based on business requirements
  • Use TARGET_LAG = DOWNSTREAM for intermediate Dynamic Tables to cascade refreshes efficiently

Warehouses

  • Share a dedicated warehouse between related Dynamic Tables for isolation and cost visibility
  • Avoid mixing interactive workloads with Dynamic Table workloads to prevent compute contention

Monitoring

  • Use SHOW DYNAMIC TABLES and INFORMATION_SCHEMA to monitor refresh status
  • Set up alerts for failed refreshes
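
A quick health check might look like this (a sketch; the DYNAMIC_TABLE_REFRESH_HISTORY table function lives in each database's INFORMATION_SCHEMA):

```sql
-- List Dynamic Tables along with their target lag and refresh settings
SHOW DYNAMIC TABLES;

-- Inspect recent refreshes (state, timing, errors) for troubleshooting
SELECT *
FROM TABLE(INFORMATION_SCHEMA.DYNAMIC_TABLE_REFRESH_HISTORY())
LIMIT 20;
```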

Use Cases

Dynamic Tables excel in several scenarios:

  • Business Intelligence (BI) - Building aggregated tables for dashboards that need to stay fresh without manual refreshes
  • Streaming Analytics - Processing streaming data with automatic, continuous updates
  • Data Sharing & Collaboration - Keeping shared datasets up-to-date for downstream consumers
  • Change Data Capture - Dynamic Tables simplify CDC by automatically capturing and processing data changes as they happen, eliminating complex manual processes. This ensures downstream systems always work with the most current data for real-time analytics, auditing, and replication—without building custom change-tracking logic.

What's Next

Dynamic Tables are powerful for simple, declarative transformations, but what about more complex use cases, such as tracking historical changes? In the next post in this series, we'll explore implementing SCD Type 2 (Slowly Changing Dimensions) with Dynamic Tables—showing you how to maintain historical snapshots of your data without complex orchestration logic. Stay tuned, and in the meantime, try building your first Dynamic Table and see how it compares to your existing pipeline workflows.
