Top Insights You Can Unlock from Microsoft Fabric Audit Logs
- Ishan Deshpande

- Sep 10
Updated: Oct 6

In my previous blog, I walked through how to access and extract Fabric audit logs. That was the foundation, because raw logs, while useful, can quickly become overwhelming.
In this blog, we’ll take the next step: exploring the insights you can derive from those logs. This isn’t just about collecting data; it’s about turning it into actionable intelligence that helps optimize governance, usage, and costs.
🧱 Building the Right Foundation with Dimension Tables
Before we dive into insights, it’s important to prepare the logs for analysis. Audit events are transactional in nature, which makes them harder to analyse directly.
By creating dimension tables (e.g., for items, workspaces, reports, datasets), you can slice and dice the logs in a more structured way.
Getting this information is quite easy; we will be using the following functions from the semantic-link-labs library -
labs.admin.list_workspaces() - Get the list of workspaces in your tenant
labs.admin.list_reports() - Get the list of reports in your tenant
labs.admin.list_datasets() - Get the list of datasets in your tenant
labs.admin.list_items() - Get the list of Fabric items in your tenant
These functions don't need any parameters; they return a complete list of objects in your Fabric tenant.
Here’s a sample notebook that you can use to get this data -
Note - In this notebook, I’ve captured all key Fabric dimension tables, such as Workspace, Report, Item, and Dataset, to maintain consistent metadata for analysis. Earlier versions of this notebook overwrote these dimension tables with each run, which caused loss of deleted object references. The updated version preserves historical integrity by introducing two new columns: IsDeleted (a flag) and DeletedOn (a timestamp). This ensures that even deleted objects remain traceable in the dimension tables, maintaining referential consistency with the fact_audit_logs table.
To get complete details on the prerequisites and how these functions work, please check my previous blog.
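If you just want to see the core API calls before opening the notebook, here is a minimal sketch (the import style and variable names are mine; the full notebook additionally writes these results out as the dim_* lakehouse tables and adds the IsDeleted/DeletedOn handling described above) -
Python code (illustrative sketch) -
# Minimal sketch: pull tenant-wide metadata with semantic-link-labs.
# Run this in a Fabric notebook with semantic-link-labs installed and the
# required admin permissions (see the previous blog for the prerequisites).
import sempy_labs.admin as admin

dim_workspaces = admin.list_workspaces()   # all workspaces in the tenant
dim_reports    = admin.list_reports()      # all reports in the tenant
dim_datasets   = admin.list_datasets()     # all datasets in the tenant
dim_items      = admin.list_items()        # all Fabric items in the tenant

# Quick sanity check before saving these as dimension tables in the lakehouse
display(dim_workspaces)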
📊 Top 5 Insights from Fabric Audit Logs
1. Unused Reports
Not every report earns its keep. By identifying reports with no ViewReport activity in the last 90 days, you can quickly spot reports that are no longer serving a purpose. This helps reduce clutter, simplify governance, and even cut down on unnecessary dataset refreshes.
SQL code -
SELECT
    r.[Report Name],
    COUNT(l.Id) AS ViewCount
FROM [LH_Monitoring].[dbo].[dim_reports] r
LEFT JOIN [LH_Monitoring].[dbo].[dim_workspaces] w
    ON r.[Workspace Id] = w.[Id]
LEFT JOIN [LH_Monitoring].[dbo].[fact_audit_logs] l
    ON r.[Report Id] = l.[Report Id]
    AND l.Operation = 'ViewReport'
    AND l.[Creation Time] > DATEADD(DAY, -90, GETDATE())
WHERE w.Type = 'Workspace'
    AND r.[IsDeleted] = 0
GROUP BY r.[Report Name]
HAVING COUNT(l.Id) = 0;
Note that the ViewReport and date filters are part of the join condition rather than the WHERE clause, so reports with no recent views still appear with a ViewCount of 0 and survive the HAVING check. You can change this based on your business requirements, for example, to find reports that have not been viewed in the last 60 days.
2. Top Users
Some users are super consumers, constantly interacting with reports. By tracking which users have the most report views, you can identify power users, target them for feedback, or even enlist them as champions for adoption across the organization.
SQL code –
SELECT
    [User Id],
    COUNT(*) AS Report_Views,
    MAX([Creation Time]) AS Last_Activity
FROM [LH_Monitoring].[dbo].[fact_audit_logs]
WHERE Operation = 'ViewReport'
GROUP BY [User Id]
ORDER BY COUNT(*) DESC;
3. Workspace Engagement
Not all workspaces are equal. Some are buzzing with activity, while others are effectively “dead.” By measuring activity levels at the workspace level, you can understand which areas of the business are actively using Fabric and which might need training, cleanup, or consolidation.
SQL code –
SELECT
    w.Name AS Workspace,
    COUNT(l.Id) AS Activity_Count
FROM [LH_Monitoring].[dbo].[dim_workspaces] w
LEFT JOIN [LH_Monitoring].[dbo].[fact_audit_logs] l
    ON w.Id = l.[Workspace Id]
WHERE w.Type = 'Workspace'
    AND w.[IsDeleted] = 0
GROUP BY w.Name
ORDER BY COUNT(l.Id) ASC;
Here we have specified Type = 'Workspace', so it will exclude all personal and admin workspaces. You can change the ORDER BY clause to DESC to get the most active workspaces instead.
4. Tracking Report Downloads
This is particularly important for governance, because downloaded content can be shared outside your tenant. The audit log also tracks sharing events such as ShareReport, ShareDashboard, and ShareDataset; monitoring these alongside downloads helps reduce security risks and ensures compliance (a sketch of a sharing query follows the download query below).
SQL code -
SELECT
    *
FROM [LH_Monitoring].[dbo].[fact_audit_logs]
WHERE Operation = 'DownloadReport';
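To keep an eye on the sharing events mentioned above, a very similar query works; this is only a sketch, so adjust the list of operations to whatever sharing events actually appear in your logs -
SELECT
    [User Id],
    Operation,
    [Creation Time],
    [Workspace Id]
FROM [LH_Monitoring].[dbo].[fact_audit_logs]
WHERE Operation IN ('ShareReport', 'ShareDashboard', 'ShareDataset')
ORDER BY [Creation Time] DESC;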
5. Unnecessary Dataset Refresh
Refreshing a dataset consumes compute resources (and money). But what if the reports connected to those datasets haven’t been viewed in months? That’s wasted effort. By correlating RefreshDataset events with report activity, you can pinpoint refreshes that deliver no value and optimize your capacity usage.
SQL code -
WITH report_views AS
(
    SELECT
        r.[Report Name],
        r.[Dataset Id],
        COUNT(l.Id) AS ViewCount
    FROM [LH_Monitoring].[dbo].[dim_reports] r
    LEFT JOIN [LH_Monitoring].[dbo].[fact_audit_logs] l
        ON r.[Report Id] = l.[Report Id]
        AND l.Operation = 'ViewReport'
        AND l.[Creation Time] > DATEADD(DAY, -60, GETDATE())
    WHERE r.[IsDeleted] = 0
    GROUP BY r.[Report Name], r.[Dataset Id]
),
dataset_refresh AS
(
    SELECT
        d.[Dataset Id],
        d.[Dataset Name],
        COUNT(l.Id) AS RefreshCount
    FROM [LH_Monitoring].[dbo].[dim_datasets] d
    LEFT JOIN [LH_Monitoring].[dbo].[fact_audit_logs] l
        ON d.[Dataset Id] = l.[Dataset Id]
        AND l.Operation = 'RefreshDataset'
        AND l.[Creation Time] > DATEADD(DAY, -60, GETDATE())
    GROUP BY d.[Dataset Id], d.[Dataset Name]
)
SELECT
    r.[Report Name],
    d.[Dataset Name],
    r.ViewCount,
    d.RefreshCount
FROM report_views r
LEFT JOIN dataset_refresh d
    ON r.[Dataset Id] = d.[Dataset Id]
WHERE d.RefreshCount > 10
    AND r.ViewCount = 0;
Here I am looking for reports that have not been viewed in the last 60 days but whose datasets have been refreshed more than 10 times; as with the earlier queries, the ViewReport filter sits in the join condition so that unviewed reports keep a ViewCount of 0. You can update this logic based on your business needs.
Additional Ideas
Access Right Changes - Monitor who is modifying access to sensitive content (Operations: UpdateArtifact, ListDataAccessRoles)
Feature Adoption - Track operations like CopilotInteraction, RequestCopilot, StartRunNotebook, ListLakehouseTables to measure how much the newer Fabric features are being used (see the sketch after this list).
Failed Refreshes - Datasets with repeated RefreshDataset failures should be addressed as a priority.
Subscription Reliance - Identify which reports rely heavily on RunEmailSubscription instead of interactive usage.
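As an example of the feature adoption idea, the sketch below counts those operations per month along with the number of distinct users. The operation names come from the list above; verify them against what actually shows up in your logs -
SELECT
    Operation,
    YEAR([Creation Time]) AS [Year],
    MONTH([Creation Time]) AS [Month],
    COUNT(*) AS Operation_Count,
    COUNT(DISTINCT [User Id]) AS Distinct_Users
FROM [LH_Monitoring].[dbo].[fact_audit_logs]
WHERE Operation IN ('CopilotInteraction', 'RequestCopilot', 'StartRunNotebook', 'ListLakehouseTables')
GROUP BY Operation, YEAR([Creation Time]), MONTH([Creation Time])
ORDER BY [Year], [Month], Operation;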
With the SQL endpoint of your Lakehouse, you can directly query the audit logs using standard SQL. If you’ve followed along with my earlier notebooks, you can plug in the SQL samples mentioned above and start uncovering insights right away, no extra setup required.
Another great way to analyse this data is creating a Power BI report. I am working on this part as well and once that's done, I’ll share it with you all.
✅ Conclusion
Audit logs are more than just a compliance checkbox; they’re a goldmine of insights into usage, adoption, and governance. By setting up dimension tables and analysing key patterns, you can surface opportunities to:
Retire unused content,
Celebrate your top users,
Spot inactive workspaces,
Monitor external sharing, and
Cut down on unnecessary refresh costs.
That’s all for this blog! 🚀 If you found these insights useful, consider sharing this post with your network. I truly believe these practices can add significant value for organizations leveraging Microsoft Fabric and I’m excited to see how you use them to drive smarter, cleaner, and more cost-effective analytics.


