Execute Warehouse T-SQL from Spark Notebooks

The great thing about Fabric is that the Lakehouse and Warehouse can share data within one workspace. However, the real value is unlocked when workloads and processing can be shared across the workspace too: the ability to run a Spark notebook, then a stored procedure, then another notebook, then another stored procedure, and so on.


Today, a mixed pipeline like this usually needs a secondary orchestration tool such as Airflow or Prefect. If we could execute T-SQL stored procedures from a notebook, we could orchestrate a pipeline across the Lakehouse and Warehouse from within Fabric itself. This would be massive.


Please add the ability to easily execute Warehouse T-SQL from a Spark notebook. Presumably this can be attempted today by setting up the Apache Spark connector for SQL Server, but that is quite complicated.
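One workaround worth noting: the Warehouse exposes a SQL endpoint, so in principle a notebook can call a stored procedure over ODBC with an Azure AD access token. The sketch below is a minimal, hedged example of that approach, not an official Fabric API. It assumes pyodbc and the ODBC Driver 18 for SQL Server are available on the Spark driver, and that a token can be obtained in the notebook session (e.g. via mssparkutils); the server, database, and procedure names are placeholders.

```python
import struct

def encode_access_token(token: str) -> bytes:
    # ODBC expects the AAD access token as UTF-16-LE bytes,
    # prefixed with its 4-byte little-endian length.
    raw = token.encode("utf-16-le")
    return struct.pack("<i", len(raw)) + raw

def run_warehouse_proc(server: str, database: str, token: str, proc: str) -> None:
    # Sketch only: executes a stored procedure on the Warehouse SQL endpoint.
    import pyodbc  # assumed installed in the notebook environment
    SQL_COPT_SS_ACCESS_TOKEN = 1256  # pre-connect attribute for token auth
    conn = pyodbc.connect(
        f"Driver={{ODBC Driver 18 for SQL Server}};"
        f"Server={server};Database={database};Encrypt=yes",
        attrs_before={SQL_COPT_SS_ACCESS_TOKEN: encode_access_token(token)},
        autocommit=True,
    )
    try:
        conn.execute(f"EXEC {proc}")
    finally:
        conn.close()

# In a Fabric notebook the token might come from mssparkutils, e.g.:
# token = mssparkutils.credentials.getToken("https://database.windows.net/.default")
# run_warehouse_proc("<endpoint>.datawarehouse.fabric.microsoft.com",
#                    "MyWarehouse", token, "dbo.usp_refresh")
```

This works only if the endpoint and driver details line up, which is exactly why a first-class, built-in way to run Warehouse T-SQL from a notebook would be so valuable.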


Status: Needs Votes
Comments
fbcideas_migusr
New Member

Up-voting! Also requesting the ability to run T-SQL notebooks from PySpark notebooks (run-all notebooks) to orchestrate workflows across Lakehouse & Warehouse! 🙂

fbcideas_migusr
New Member
Status changed to: Needs Votes