Applies to: ✅ SQL database in Microsoft Fabric
Fabric Dataflow Gen2 allows you to transform data while moving it. This article is a quick orientation to using dataflows with SQL database in Fabric, one of several options for copying data from a source to a destination. For a comparison of options, see Microsoft Fabric decision guide: copy activity, dataflow, or Spark.
Fabric Dataflow Gen2 supports many configurations and options, including scheduling. This article's example is simplified to get you started with a basic data copy.
Prerequisites
- You need an existing Fabric capacity. If you don't, start a Fabric trial.
- Make sure that SQL database in Fabric is enabled in the Admin Portal tenant settings.
- Create a new workspace or use an existing Fabric workspace.
- Create a new SQL database or use an existing SQL database.
- Load the AdventureWorks sample data in a new SQL database, or use your own sample data.
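As a quick sanity check before you build the dataflow, you can query the database to confirm the sample data loaded. The schema, table, and column names below assume the AdventureWorksLT sample; adjust them if you loaded your own data.

```sql
-- Spot-check the sample data (assumes the AdventureWorksLT sample schema)
SELECT TOP (5) ProductID, Name, ListPrice
FROM SalesLT.Product
ORDER BY ProductID;
```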
Get started with Fabric Dataflow Gen2
- Open your workspace. Select + New Item.
- Select Dataflow Gen2 from the menu.
- Once the new dataflow opens, select Get data. You can also select the down arrow on Get data and then More.
- Pick your SQL database in Fabric from the OneLake list. It will be the source of this dataflow.
- Pick a table by checking the box next to it.
- Select Create.
- You now have most of your dataflow configured. There are many configurations you can apply from here to tailor the data movement to your needs.
- Next, we need to configure a destination. Select the plus button (+) next to Data destination.
- Select SQL database.
- Select Next. If Next isn't enabled, select Sign in to reauthenticate.
- Select your destination target.
- Select Next.
- Review the settings and options available. For this example, use Automatic settings.
- Select Next. You now have a complete data flow.
- Select Publish. When the dataflow publishes, it refreshes the data, so you'll see it refreshing as soon as publishing completes. Once the refresh finishes, your new table appears in your database.
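Conceptually, the copy that the steps above configure amounts to: read rows from the source table and write them to a new table in the destination. Here's a minimal, runnable sketch of that pattern using Python's built-in sqlite3 module as a stand-in for both databases (the table and column names here are illustrative, not from the article):

```python
import sqlite3

# In-memory stand-ins for the source and destination databases.
source = sqlite3.connect(":memory:")
destination = sqlite3.connect(":memory:")

# Seed the source with a small illustrative table.
source.execute("CREATE TABLE Product (ProductID INTEGER, Name TEXT, ListPrice REAL)")
source.executemany(
    "INSERT INTO Product VALUES (?, ?, ?)",
    [(1, "Mountain Bike", 1200.0), (2, "Helmet", 35.0)],
)

# The "dataflow": read every row from the source table...
rows = source.execute("SELECT ProductID, Name, ListPrice FROM Product").fetchall()

# ...and write it to a new table in the destination.
destination.execute("CREATE TABLE Product (ProductID INTEGER, Name TEXT, ListPrice REAL)")
destination.executemany("INSERT INTO Product VALUES (?, ?, ?)", rows)
destination.commit()

copied = destination.execute("SELECT COUNT(*) FROM Product").fetchone()[0]
print(copied)  # → 2
```

In the actual Fabric SQL database, you can confirm the new table arrived with a query such as `SELECT name FROM sys.tables;`.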