New Enhancements for Fabric Data Pipelines
- mandarp0
- Oct 11, 2024
- 3 min read
Updated: Sep 4, 2025
Microsoft Fabric’s Data Factory pipelines are known for their robust capabilities in building complex workflows and orchestrating data activities. With the latest updates, Fabric continues to evolve, introducing new features driven by customer feedback.
Invoke Remote Pipeline Activity
The Invoke remote pipeline activity is now available in public preview. With this enhancement, you can call Azure Data Factory (ADF) or Synapse pipelines directly from within Fabric pipelines, which means you can integrate Mapping Data Flows or SSIS packages into your Fabric workflow. This opens up a wide range of use cases, such as:
- Utilizing existing ADF or Synapse pipelines for processing heavy workloads.
- Creating hybrid workflows that span across platforms.
- Optimizing your existing investment in Azure tools by calling them inline from a Fabric data pipeline.
Note that the previous (now legacy) Invoke Pipeline activity does not support remote pipeline invocation or child pipeline monitoring. For the latest functionality, including child pipeline monitoring and the ability to call external pipelines, you’ll need to switch to the new Invoke Pipeline activity.
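To make the remote call concrete, here is a minimal Python sketch of what a remote invocation amounts to: triggering a run of an existing ADF pipeline through the ADF REST API. This is illustrative only; the Invoke Pipeline activity handles this call for you inside the Fabric pipeline editor, and the subscription, resource group, factory, and pipeline names below are placeholders.

```python
# Illustrative sketch: the equivalent ADF REST call behind a remote pipeline invocation.
# All resource names are placeholders; in Fabric you would configure these in the
# new Invoke Pipeline activity instead of writing this code.
import requests
from azure.identity import DefaultAzureCredential  # pip install azure-identity

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "<resource-group>"     # placeholder
FACTORY_NAME = "<adf-factory-name>"     # placeholder
PIPELINE_NAME = "<adf-pipeline-name>"   # placeholder


def trigger_adf_pipeline(parameters: dict | None = None) -> str:
    """Start a run of an existing ADF pipeline and return its run ID."""
    token = DefaultAzureCredential().get_token(
        "https://management.azure.com/.default"
    ).token
    url = (
        f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
        f"/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.DataFactory"
        f"/factories/{FACTORY_NAME}/pipelines/{PIPELINE_NAME}/createRun"
        "?api-version=2018-06-01"
    )
    response = requests.post(
        url,
        headers={"Authorization": f"Bearer {token}"},
        json=parameters or {},  # pipeline parameters, if any
    )
    response.raise_for_status()
    return response.json()["runId"]


if __name__ == "__main__":
    print("Started ADF run:", trigger_adf_pipeline({"inputPath": "raw/2024/10"}))
```

In a Fabric data pipeline you would simply point the Invoke Pipeline activity at the remote workspace and pipeline; the sketch above just shows the kind of call that happens behind the scenes.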

Functions Activity – Support for Fabric User Data Functions
The Functions activity in Fabric pipelines, which already supports Azure Functions, can now also call Fabric User Data Functions. This feature is also in public preview, enabling you to inject custom code into automated workflows for a new level of customization and control.
Fabric User Data Functions allow developers to create serverless, scalable custom code optimized for Fabric’s data platform. With this enhancement, the existing Azure Functions pipeline activity now supports Fabric User Data Functions, providing powerful new ways to handle data processing and transformations. Whether you’re building complex data engineering pipelines or automating small tasks, this new functionality allows for:
- Seamless integration of custom code into automated pipelines.
- Enhanced flexibility and control over how data is transformed and processed.
- The ability to run serverless code in an optimized environment without worrying about infrastructure.
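As a rough illustration of the kind of custom code you might expose, here is a minimal Python sketch of a small transformation function. The Fabric User Data Functions SDK and its registration step are deliberately omitted; this only shows the shape of logic a Functions activity could call, and the function and field names are assumptions for the example.

```python
# A minimal sketch of custom transformation logic that could be published as a
# Fabric User Data Function. The actual publishing/registration is done through
# the Fabric Functions tooling and is not shown here.
from datetime import datetime, timezone


def clean_orders(rows: list[dict]) -> list[dict]:
    """Normalize a batch of order records before downstream processing."""
    cleaned = []
    for row in rows:
        cleaned.append({
            "order_id": str(row["order_id"]).strip(),
            "amount": round(float(row.get("amount", 0.0)), 2),
            "currency": (row.get("currency") or "USD").upper(),
            "processed_at": datetime.now(timezone.utc).isoformat(),
        })
    return cleaned


if __name__ == "__main__":
    sample = [{"order_id": " 1001 ", "amount": "19.991", "currency": "eur"}]
    print(clean_orders(sample))
```

A pipeline’s Functions activity would pass the input batch in as a parameter and hand the returned records to the next activity in the workflow.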

Spark Session Tags
One of the most popular use cases in Fabric Data Factory is automating Spark Notebook executions within data pipelines. A common challenge has been the cold-start delays of initiating new Spark sessions, which can slow down the overall workflow. Fabric has addressed this by introducing Session tags under “Advanced settings” in the Spark Notebook activity.
With Session tags, you can now reuse an existing Spark session, drastically reducing cold-start times and improving the overall efficiency of your data pipelines. By tagging your session and reusing it across activities, you avoid the time-consuming process of creating a new session for every notebook execution. This enhancement brings significant performance improvements, particularly for use cases where Spark jobs are executed frequently.
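Conceptually, a session tag keys session reuse on a caller-supplied name, so repeated runs with the same tag skip the expensive session start. The toy Python sketch below models that idea; the SparkSessionPool class and the fixed start-up delay are illustrative assumptions, not Fabric’s implementation.

```python
# Conceptual model of tag-based Spark session reuse (not Fabric's implementation).
import time


class SparkSessionPool:
    """Toy model: sessions are cached by tag, so tagged re-runs skip cold start."""

    def __init__(self, startup_seconds: float = 3.0):
        self._sessions: dict[str, str] = {}      # tag -> session id
        self._startup_seconds = startup_seconds  # simulated cold-start cost

    def get_session(self, tag: str) -> str:
        if tag in self._sessions:
            return self._sessions[tag]           # warm reuse: no cold start
        time.sleep(self._startup_seconds)        # simulate session start-up
        session_id = f"spark-session-{len(self._sessions) + 1}"
        self._sessions[tag] = session_id
        return session_id


pool = SparkSessionPool()

start = time.perf_counter()
pool.get_session("nightly-etl")                  # first run pays the cold start
first = time.perf_counter() - start

start = time.perf_counter()
pool.get_session("nightly-etl")                  # same tag: session is reused
second = time.perf_counter() - start

print(f"first run: {first:.2f}s, second run: {second:.4f}s")
```

In Fabric itself you simply set the same session tag in the Notebook activity’s Advanced settings across the activities that should share a session.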

Conclusion
Microsoft Fabric’s latest updates continue to push the boundaries of what is possible with data pipelines. These new enhancements provide powerful tools for complex workflows, integrating external pipelines, adding custom code, and improving performance. Whether you are a seasoned data engineer or just getting started with Fabric, these features will help you take your data processing to the next level.
For expert data solutions tailored to your business, contact us at Numlytics. Transform your data into actionable insights!