
Capture Fabric Tenant Audit Logs in Minutes — The Simplest Way Yet

  • Writer: Ishan Deshpande
  • Aug 30
  • 4 min read

Introduction


When working with Microsoft Fabric, one of the key responsibilities for administrators is to track tenant-level activities—who did what, when, and where. These audit logs are critical for governance, compliance, and security monitoring.


Traditionally, capturing these logs required using the Power BI Admin API or building custom scripts, which could be a bit complex. Recently, I explored Semantic Link Labs, and I found an incredibly simple way to get these logs into a Lakehouse using just a single function:

👉 sempy_labs.admin.list_activity_events


In this blog, we’ll walk through:

  • What Semantic Link Labs is and what kind of insights it provides.

  • The prerequisites before you start.

  • A step-by-step demo capturing logs in a Lakehouse.

  • Detailed coverage of operations you can monitor.

  • Final thoughts



What is Semantic Link Labs?


Semantic Link Labs is an experimental package that extends Semantic Link with additional features for developers, data engineers, and administrators.

With the sempy_labs.admin module, you can:

  • Query activity events from your Fabric tenant.

  • Get a list of reports, workspaces, datasets, and domains in your tenant.

  • See which users have access to reports, datasets, and workspaces.

  • And much more

 

Essentially, it saves you from writing boilerplate REST API calls, handling authentication, or managing pagination.
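For example, pulling a single day of events is one function call. Here is a minimal sketch (verify the exact signature against the library documentation):

import sempy_labs.admin as admin

# Fetch all tenant activity events for one day.
# Timestamps are ISO-style strings; the API allows at most a one-day window per call.
df = admin.list_activity_events(
    start_time="2025-08-01T00:00:00",
    end_time="2025-08-01T23:59:59",
)
display(df)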


Prerequisites

Before you begin, make sure:

  1. You have Fabric Admin Rights (you must be a Fabric admin to access tenant activity events).

  2. You’ve installed Semantic Link Labs in your Fabric notebook, or you’ve created a custom environment that includes the library and attached it to the notebook.

  3. A Lakehouse is created and attached to your notebook.

 


Demo


Create a notebook (I’ll attach a link to this notebook in the resources section at the end of this blog).



Step 1: Import/Install necessary libraries


Here I have used a custom environment and then imported the necessary libraries.


Note – If you have not created a custom environment, you will need to install the library in this notebook session:

%pip install semantic-link-labs

Step 2: Set spark config


Next, we set some Spark configuration. This allows us to use spaces in column names; with the default configuration, we would need to replace spaces with underscores.


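The configuration itself appears as a screenshot in the original post. As a rough sketch, enabling Delta column mapping is one way to achieve this; the exact settings used in the original may differ:

# Default new Delta tables to column mapping by name, which permits
# spaces and special characters in column names.
spark.conf.set("spark.databricks.delta.properties.defaults.columnMapping.mode", "name")
spark.conf.set("spark.databricks.delta.properties.defaults.minReaderVersion", "2")
spark.conf.set("spark.databricks.delta.properties.defaults.minWriterVersion", "5")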

Step 3: Fetch logs and save them into a Delta table


This code snippet retrieves the audit logs for a given day, creates a table, and writes the data into it. Each log entry includes details such as what operation was performed, when it occurred, who performed it, and on which object.


The function list_activity_events requires two parameters: start datetime and end datetime. However, there’s an important limitation — it can only fetch logs for one day at a time. To work around this, we can call the function in a loop to pull data for the past 25–30 days or any desired range.


In this notebook, I’ve included:

  • Code to load activity logs for the last 25 days.

  • A separate example to fetch yesterday’s data.


With this setup, you can easily schedule the notebook to run daily. Over time, this allows you to build a historical log store covering 60 or 90 days, or even longer, depending on your retention needs.


Note: Don’t forget to update the code with your table’s ABFSS path, and when you run the notebook, make sure you are not overlapping any days; otherwise you will end up with duplicate data (truncate the table if required).
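Since the notebook is linked in the resources section rather than embedded here, below is a minimal sketch of the daily loop. The ABFSS path is a placeholder, and the parameter and column names should be verified against the library docs:

from datetime import date, timedelta

import sempy_labs.admin as admin

# Placeholder: replace with your own Lakehouse table's ABFSS path.
table_path = (
    "abfss://<workspace>@onelake.dfs.fabric.microsoft.com/"
    "<lakehouse>.Lakehouse/Tables/activity_events"
)

# The function returns at most one day of logs per call, so loop day by day.
for offset in range(25, 0, -1):
    day = date.today() - timedelta(days=offset)
    df = admin.list_activity_events(
        start_time=f"{day}T00:00:00",
        end_time=f"{day}T23:59:59",
    )
    if df.empty:
        continue  # no activity recorded for this day

    # Convert pandas to Spark and append to the Delta table. Re-running the
    # same day appends duplicates, so keep the date ranges non-overlapping.
    spark.createDataFrame(df).write.format("delta").mode("append").save(table_path)

To fetch only yesterday’s data on a daily schedule, run the same body once with day = date.today() - timedelta(days=1).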



What Operations Are Captured?


The tenant audit logs are essentially a detailed activity feed of everything happening inside your Microsoft Fabric environment. They cover who did what, when, and where, which is essential for governance, compliance, troubleshooting, and monitoring adoption.


When you call list_activity_events, you’ll typically see JSON objects with fields like:

  • Activity – the type of action performed (e.g., DatasetRefreshStart)

  • UserId / UserPrincipalName – who performed the action

  • ActivityTime – UTC timestamp

  • WorkspaceId / WorkspaceName – where the activity occurred

  • ArtifactId / ArtifactName – dataset, report, warehouse, etc.

  • ClientIP / Device – where the action originated


Here are the key categories of operations that are captured:


1. Dataset and Dataflow Operations

  • Dataset refresh started / completed / failed

  • Dataflow refresh started / completed

  • Credential updates on datasets/dataflows

  • Query execution events


👉 Useful for monitoring refresh failures, checking refresh frequency, and identifying performance bottlenecks.
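As an illustration, a per-dataset failure count could start from a query like this; the DatasetRefreshFailed activity name is an assumption, so match it against the values you actually see in your logs:

# Count refresh failures per dataset over the captured history.
spark.sql("""
    SELECT ArtifactName, COUNT(*) AS failure_count
    FROM activity_events
    WHERE Activity = 'DatasetRefreshFailed'
    GROUP BY ArtifactName
    ORDER BY failure_count DESC
""").show()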


2. Report and Dashboard Interactions

  • Report viewed, shared, or exported (PDF, PPTX, Excel)

  • Dashboard viewed, pinned, or shared

  • Subscription usage


👉 Great for understanding report adoption, top consumers, and potential overuse of export features.


3. Lakehouse, Warehouse, and SQL Analytics

  • Lakehouse table created, updated, deleted

  • Warehouse query executed

  • Warehouse object creation

  • SQL queries run via endpoints


👉 Essential for data engineers and admins to trace schema changes or investigate query load.


4. User & Workspace Management

  • Workspace created, deleted, renamed

  • User added to workspace, role changed (Admin, Member, Contributor, Viewer)

  • Group membership changes

  • Permissions assigned/revoked on artifacts


👉 Helps track governance and ensure correct access levels.


5. Administrative and Tenant-Level Operations

  • Capacity changes (e.g., premium settings, resource scaling)

  • Tenant settings updated (e.g., export restrictions, sharing policies)

  • Feature toggles turned on/off

  • Audit log configuration changes


👉 This is critical for security & compliance teams who need visibility into tenant-wide policy changes.


6. Other Notable Events

  • Export data operations (CSV, Excel downloads)

  • Sharing links created or deleted

  • API calls from service principals

  • Embed token generation (for external apps)


👉 These are often the most important for detecting unusual activity patterns or potential misuse.


💡 Tip: Since all these events land in your Lakehouse as structured data, you can build Power BI Reports to:

  • Track user activity trends

  • Monitor refresh failures over time

  • Audit workspace and role changes

  • Detect unusual patterns, like massive exports by a single user (a sample query follows this list)

  • And much more
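As a starting point for that last detection scenario, flagging unusually heavy exporters could look like this; the activity names and the threshold are illustrative only:

# Flag users with a high daily export count (the threshold of 50 is arbitrary).
spark.sql("""
    SELECT UserPrincipalName, to_date(ActivityTime) AS day, COUNT(*) AS exports
    FROM activity_events
    WHERE Activity IN ('ExportReport', 'ExportArtifact')
    GROUP BY UserPrincipalName, to_date(ActivityTime)
    HAVING COUNT(*) > 50
    ORDER BY exports DESC
""").show()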

 

I’m working on a couple of things around this and will soon share more blogs and solutions.

 


Conclusion


Capturing Fabric tenant audit logs doesn’t need to be complex. Thanks to Semantic Link Labs, with just one function (list_activity_events), you can seamlessly bring logs into a Lakehouse for auditing, monitoring, and further analytics.

If you’re a Fabric admin, I highly recommend giving this method a try. It’s by far the easiest way to stay on top of what’s happening in your tenant.



Resources and References


Demo Notebook


Semantic Link Labs Documentation


