Passing Values Between Data Pipelines and Notebooks
- Ishan Deshpande
- Mar 15
- 2 min read
Updated: Mar 17

Microsoft Fabric provides a seamless integration between Data Pipelines and Notebooks, enabling a dynamic data processing workflow. A common requirement in data engineering is passing values from a Data Pipeline to a Notebook and retrieving results back for further processing. This blog will walk through different techniques to achieve this in Microsoft Fabric.
Passing Parameters from a Data Pipeline to a Notebook
Step 1: Define Parameters in the Notebook
This is straightforward: declare your variables in a cell, then toggle that cell to a parameter cell.
Note - For this method to work, your notebook's default language should be PySpark.

In this example I have two variables, and their values will come from the Data Pipeline.
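A minimal parameter cell might look like the sketch below. The names run_id and pipeline_name are my own illustrative choices, and the defaults are placeholders that the pipeline overrides at run time:

```python
# Parameter cell: mark this cell via "Toggle parameter cell" in the Fabric notebook.
# The defaults below are placeholders; the pipeline's Base Parameters override them.
run_id = ""
pipeline_name = ""
```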
Step 2: Configure the Pipeline to Pass Parameters
Create a Data Pipeline in Microsoft Fabric.
Add a Notebook activity to the pipeline.
In the Settings tab of the Notebook activity, navigate to the Base Parameters section.
Define the parameter name and provide a value. To get the RunID and Pipeline Name, we can use the following expressions:
RunID - @pipeline().RunId
Pipeline Name - @pipeline().PipelineName
Note - The parameter names must match the ones declared in the notebook.

Save and run the pipeline.
When the pipeline runs, these values are injected into the Notebook execution.
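Once the run starts, the injected values simply replace the parameter cell's defaults, so a quick sanity check at the top of the notebook can catch misconfigured parameters. The values below are illustrative, not real IDs:

```python
# Values as they might arrive from the pipeline (illustrative placeholders).
run_id = "placeholder-run-id"    # would be supplied by the pipeline's RunId expression
pipeline_name = "MyPipeline"     # would be supplied by the pipeline-name expression

# Fail fast if the pipeline did not supply both parameters.
if not run_id or not pipeline_name:
    raise ValueError("Expected run_id and pipeline_name from the pipeline")
print(f"Run {run_id} of pipeline {pipeline_name}")
```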
Use cases
Logging & Debugging: Store execution details in log tables for troubleshooting errors, especially when managing multiple pipelines and Notebooks.
Conditional Execution: Use the parameter in a switch statement to execute different logic within the Notebook based on its value, which can be derived from a previous pipeline activity.
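The conditional-execution use case can be sketched like this; the parameter name step and the handler names are assumptions for illustration, not part of the original post:

```python
# "step" would be declared in the notebook's parameter cell and set by the pipeline.
def run_step(step: str) -> str:
    # Dispatch table standing in for the switch-style logic described above.
    handlers = {
        "ingest": lambda: "ran ingestion logic",
        "transform": lambda: "ran transformation logic",
    }
    handler = handlers.get(step)
    if handler is None:
        raise ValueError(f"Unknown step: {step!r}")
    return handler()

print(run_step("transform"))
```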
Passing Values from a Notebook Back to a Pipeline
To return values from a Notebook back to the Data Pipeline, we leverage mssparkutils.notebook.exit().

Here I have built a simple use case: if the length of both variables is greater than 0 (i.e., each has a value), the Notebook returns 1; otherwise it returns 0.
Step 1: Return a Value from the Notebook
When you run this Notebook within a pipeline, the value is saved in the output of the Notebook activity as JSON.
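The check described above can be sketched as a small helper. Since mssparkutils.notebook.exit() is only available inside a Fabric notebook, the call itself is shown commented out:

```python
def compute_exit_value(run_id: str, pipeline_name: str) -> int:
    # Return 1 when both parameters carry a value, otherwise 0.
    return 1 if len(run_id) > 0 and len(pipeline_name) > 0 else 0

# Inside the Fabric notebook, hand the result back to the calling pipeline:
# mssparkutils.notebook.exit(compute_exit_value(run_id, pipeline_name))
```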
Step 2: Capture Output in the Data Pipeline
To access this value in the next activity connected to the Notebook activity, you can use the following expression:
@activity('Notebook_Activity_Name').output.result.exitValue

In this example I am accessing the output of the Notebook activity in the If condition activity and checking whether it is greater than 0; if so, the variable is set to "success", else to "failure".
Expression - @greater(int(activity('Notebook1').output.result.exitValue), 0)
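In Python terms, the expression above walks the activity's output JSON roughly like this. Only result.exitValue is guaranteed by the post; the surrounding shape is a sketch:

```python
import json

# Illustrative Notebook activity output; the real output carries more fields.
activity_output = json.loads('{"result": {"exitValue": "1"}}')

# Mirrors @greater(int(activity('Notebook1').output.result.exitValue), 0)
exit_value = int(activity_output["result"]["exitValue"])
print(exit_value > 0)
```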
Use case
Conditional Workflow Execution: Control the flow of execution by running different activities based on the Notebook's output. For instance, run Notebook2 if Notebook1 returns a value greater than 0; otherwise, run Notebook3.
Conclusion
Integrating Data Pipelines with Notebooks in Microsoft Fabric provides a flexible and efficient way to manage dynamic data workflows. By passing values between them, data engineers can:
Automate parameterized execution for seamless data transformation.
Improve workflow efficiency by making data-driven decisions within pipelines.
Enhance debugging and monitoring with structured logging.
Enable adaptive execution by controlling the flow of activities based on Notebook outputs.