
Time Travel in Delta Tables

  • Writer: Ishan Deshpande
  • Apr 26
  • 2 min read

In today’s world of data engineering, it's not enough to store data — you must trace changes, recover past states, and support reproducible analytics. Microsoft Fabric Lakehouse provides exactly that with Time Travel on Delta Tables.

In this blog, we'll dive deep into how time travel works inside Microsoft Fabric's Lakehouse, why it’s important, and how you can use it in your day-to-day projects.


What is Time Travel in Fabric's Delta Tables?


In Microsoft Fabric, your Lakehouse stores data primarily in Delta format — an open-source storage layer that brings ACID transactions to big data.

Time Travel lets you query a Delta table as it existed at an earlier point in time, using either a timestamp or a version number.


It’s like having a "rewind" button for your data:

  • Accidentally deleted rows? Roll them back.

  • Need historical analysis? Query old versions.



Why Time Travel is Crucial in Fabric Lakehouse


  • Undo mistakes (delete/update/overwrite issues)

  • Pipeline debugging (process historical data snapshots)

  • Audit and compliance (meet legal or business data traceability requirements)

  • Machine Learning reproducibility (train models on past data versions)



How Time Travel Works in Fabric


Every write operation on a Delta table (an append, overwrite, UPDATE, DELETE, or MERGE) automatically creates a new version of the table inside the Fabric Lakehouse.

Fabric stores:

  • Data files (Parquet format)

  • Transaction logs (_delta_log folder)

These logs maintain versioned snapshots which power time travel.

You don't need to configure anything special — it's built-in.
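To see this in action, here is a minimal PySpark sketch for a Fabric notebook (the path Tables/demo_versions is a hypothetical placeholder): two writes produce two table versions, and the earlier snapshot remains queryable.

path = "Tables/demo_versions"  # hypothetical Lakehouse-relative path

# Each committed write becomes a new, queryable version of the table
spark.range(5).write.format("delta").mode("overwrite").save(path)   # version 0
spark.range(5, 10).write.format("delta").mode("append").save(path)  # version 1

# The first snapshot is still available: 5 rows at version 0, 10 rows now
print(spark.read.format("delta").option("versionAsOf", 0).load(path).count())
print(spark.read.format("delta").load(path).count())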



How to Query Historical Versions in Fabric


Fabric Lakehouse lets you query past versions easily. The examples below use Spark SQL, which you can run from a Fabric notebook (the SQL analytics endpoint has its own T-SQL syntax for time travel).

Using Version Number

SELECT *
FROM delta.`Tables/your_table_path`
VERSION AS OF 5;

Using Timestamp

SELECT *
FROM delta.`Tables/your_table_path`
TIMESTAMP AS OF '2025-04-25T12:00:00';

Note: the timestamp must be in UTC.
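The same queries work with the Spark DataFrame API. A quick sketch (the table path is a placeholder):

# Read a specific version of the table
df_v5 = (
    spark.read.format("delta")
    .option("versionAsOf", 5)
    .load("Tables/your_table_path")
)

# Read the table as it was at a UTC timestamp
df_ts = (
    spark.read.format("delta")
    .option("timestampAsOf", "2025-04-25T12:00:00")
    .load("Tables/your_table_path")
)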



Real-World Use Cases


  1. Restore Deleted Data

-- View the data as it was before the delete
SELECT * FROM delta.`/path/to/table` VERSION AS OF 10;

-- Materialize version 10 as a new table
CREATE OR REPLACE TABLE restored_table AS
SELECT * FROM delta.`/path/to/table` VERSION AS OF 10;
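If you would rather roll the live table back in place, newer Delta Lake releases also expose a restore API. Assuming your Fabric runtime's Delta version supports it, a PySpark sketch looks like this:

from delta.tables import DeltaTable

# Roll the live table back to version 10 in place.
# The restore is committed as a new version, so it is auditable and reversible.
DeltaTable.forPath(spark, "/path/to/table").restoreToVersion(10)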

  2. Reproduce a Failed Job

If a job failed yesterday and you want to reprocess it:

# Read the table exactly as it looked on 2025-04-18 (UTC)
df = (
    spark.read.format("delta")
    .option("timestampAsOf", "2025-04-18")
    .load("/path/to/table")
)

  3. Compare Table Versions

Compare record counts between versions:

SELECT COUNT(*) FROM delta.`/path/to/table` VERSION AS OF 20;
SELECT COUNT(*) FROM delta.`/path/to/table` VERSION AS OF 21;
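Beyond row counts, you can diff the actual rows between two versions with the DataFrame API. A sketch (the path is a placeholder):

v20 = spark.read.format("delta").option("versionAsOf", 20).load("/path/to/table")
v21 = spark.read.format("delta").option("versionAsOf", 21).load("/path/to/table")

added = v21.exceptAll(v20)    # rows introduced in version 21
removed = v20.exceptAll(v21)  # rows dropped in version 21

print(added.count(), "rows added;", removed.count(), "rows removed")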

  4. View Table History

You can always view the audit trail:

DESCRIBE HISTORY delta.`Tables/your_table_path`;

This will show:

  • Operation type (WRITE, UPDATE, DELETE)

  • User who made the change

  • Operation timestamp

  • Schema changes

  • Number of rows affected


Perfect for auditing and governance!
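The same history is available from the Python API, which makes it easy to filter. For example, to find the most recent DELETE so you know which version to travel back to (a sketch; the path is a placeholder):

from delta.tables import DeltaTable
from pyspark.sql import functions as F

history = DeltaTable.forPath(spark, "Tables/your_table_path").history()

# Most recent DELETE commit: travel back to (version - 1) to see the lost rows
last_delete = (
    history.filter(F.col("operation") == "DELETE")
    .orderBy(F.col("version").desc())
    .select("version", "timestamp", "userName", "operationMetrics")
    .first()
)
print(last_delete)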


Retention and Vacuum in Fabric


  • Fabric retains Delta table history for a default period: transaction log entries are kept for roughly 30 days by default.

  • You can run VACUUM to remove data files that are no longer referenced by the table and reduce storage costs.

Example:

VACUUM delta.`Tables/your_table_path` RETAIN 168 HOURS;  -- keep the last 7 days (168 hours) of history
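The same clean-up is available from the Python API. A sketch (the path is a placeholder; the retention argument is in hours):

from delta.tables import DeltaTable

table = DeltaTable.forPath(spark, "Tables/your_table_path")

# Remove unreferenced data files older than 7 days (168 hours).
# Versions that depended on those files can no longer be time travelled to.
table.vacuum(168)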

Be cautious!

Once vacuumed, you cannot time travel to those older versions anymore.



Conclusion

Time Travel inside Microsoft Fabric's Lakehouse is a critical capability that empowers your organization to be:

  • Safer (with easy data recovery)

  • More compliant (full audit trails)

  • Highly reliable (reproducible pipelines)


And the best part?

Fabric makes it seamless and simple — you don’t need to configure anything special. Just build your Lakehouse, and time travel is there when you need it.
