The great thing about Fabric is that we can share data between Lakehouse and Warehouse in one workspace. However, the real value is unlocked when we can share workloads/processing/pipelines across the workspace too. That means being able to run a Spark notebook, then a stored procedure, then another notebook, then another stored procedure, and so on.
Usually a mixed pipeline like this needs an external orchestrator such as Airflow or Prefect. However, if we could execute T-SQL stored procedures directly from a notebook, we could orchestrate a pipeline across Lakehouse and Warehouse from within Fabric itself. This would be massive.
Please add the ability to easily execute Warehouse T-SQL from a Spark notebook. Presumably we could attempt this today by setting up the Apache Spark connector, but that is quite complicated.