AaronK11
Regular Visitor

Pipeline and Dataverse not copying my columns

I am currently using a pipeline to connect and bring in all of the tables I want from Dataverse into my data lake. My method is the Copy data assistant: I connect to Dataverse using the environment domain, and once connected I choose all of the tables I want. However, this is where the issue is. When I preview the tables, any columns that I would normally expand inside Power BI are just blank columns with no data in them. When working in Power BI I expand some of these and bring in other columns; I don't have the option to do that here. I know I wouldn't necessarily expand them here, but you get the point.

Once I've done this, I save the files in Parquet format and load them to my bronze folder. After that I use a dataflow to connect and try to transform my data, only to find that I can't expand any columns because they are all blank, and some data is missing from certain columns which I can see if I connect directly in Power BI. If anyone could help it would be awesome.

1 ACCEPTED SOLUTION

@AaronK11 

I am going to assume that when you connect to your Dataverse table from the Power Query editor, you see something similar to my screenshots:

 

Let's call this screenshot "S0".  S0 is shown in the GUI when clicking on the step named "Source" in the Applied Steps list on the right of the Power Query (PQ) editor window.  Each such step is a line of code in the functional programming language called M.  A collection of such steps, i.e. an M script, is what PQ refers to as a query.  To see the whole query, right-click on a query in the left pane of the window titled "Queries" and select the Advanced Editor option in the menu that pops up. 

 

Now let's go back to the screenshots.  What I am showing you here is a table from a data source that is a database (DB) (in this case, SQL Server).  This is a relational DB, meaning the tables in the DB are linked by relationships.  

 

In the case of the table in S0, the table links to 2 other tables in the DB using what's called a foreign key.  So when you connect Power Query to this table and click the Source step, it will show you the foreign keys as columns of type table, as shown below in S0.

Screenshot 2025-04-15 121358.jpg

 

If you click on the icon with the 2 outward-facing arrows, to the right of the column name, the UI shows you the column names that will be imported if you click OK. In this case, the column names come from the table DATA_FAST.

Screenshot 2025-04-15 121431.jpg

 

Here are the first steps of the M query:

let
    Source = Sql.Database("DB_SERVER_NAME", "DB_NAME"),
    dbo_FileImport = Source{[Schema="dbo",Item="TABLE_NAME"]}[Data],
    // DATA_SLOW and DATA_FAST are the tables that TABLE_NAME points to with a foreign key
    #"Removed Columns" = Table.RemoveColumns(dbo_FileImport, {"DATA_SLOW", "DATA_FAST"})
in
    #"Removed Columns"

 

With that out of the way, you have 2 choices:

 

1. Bring all your tables in using Power Query, which in Fabric is called Dataflow Gen2 (DFg2); if you create a Dataflow Gen2 artifact, you will be presented with the same GUI you see after clicking the "Transform data" button in Power BI Desktop. Remove the columns that just represent links or relationships to other tables, keep only the data columns, and finally produce one new, single table. Repeat this process for all the tables you want to ingest into your lakehouse, or merge some or all of the tables, depending on what makes sense for your use case; 

 

OR

 

2. If you do not need to perform any transformations (the "T" in the ETL/ELT acronym), or if you want to perform them in T-SQL rather than in the Power Query M language, then simply use a "Copy data" activity with the appropriate SQL script in the Source tab of the activity (a script might be needed if you do not want to ingest the columns used for the DB relations that hold foreign keys).

 

FYI, doing everything with a pipeline is cheaper, in terms of CUs consumed, than doing everything in a DFg2.  

 

I am going to assume that you want to ingest the data from your tables to perform some sort of analytic workload.  If so, then you do not need to maintain the relations between your tables as in a regular relational database. So this process will denormalize your DB (assuming the source DB/Dataverse was a normalized OLTP DB) and thus ready the data to be processed by whatever analytic engine you fancy using downstream (Power BI (SSAS on the inside), Spark, Databricks, etc.).
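To make the denormalization concrete, here is a tiny self-contained M sketch (all table and column names are invented for illustration) that merges a child table with its parent on the former foreign key and keeps only the data columns, yielding one flat table:

```m
let
    // Inline stand-ins for two related source tables.
    Orders = #table(
        {"OrderId", "CustomerId", "Amount"},
        {{1, "C1", 100}, {2, "C2", 250}}
    ),
    Customers = #table(
        {"CustomerId", "CustomerName"},
        {{"C1", "Contoso"}, {"C2", "Fabrikam"}}
    ),
    // Join on the former foreign key...
    Joined = Table.NestedJoin(
        Orders, {"CustomerId"},
        Customers, {"CustomerId"},
        "Customer", JoinKind.LeftOuter
    ),
    // ...then expand only the data column we need, dropping the relationship.
    Flat = Table.ExpandTableColumn(Joined, "Customer", {"CustomerName"})
in
    Flat
```

The result is one denormalized table (OrderId, CustomerId, Amount, CustomerName) ready for an analytic engine downstream.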

 

If some of these terms make no sense, please ask ChatGPT or Copilot to explain them to you.

 

And if all this helps, please give a thumbs-up and Accept as solution. 


10 REPLIES
AaronK11
Regular Visitor

Hi, thanks for the reply. 

 

Where you have mentioned to "make sure the complex columns are included in the schema": when I set up the first Copy data activity and am modifying the columns, those complex columns don't even appear on that page; it's as if the pipeline activity doesn't recognise them as columns to begin with. I'm not sure how I can manually add them in here.

I was applying the flatten type in the pipeline as I was copying over to my lakehouse, but of course the columns aren't even there in that instance, as they haven't been brought over from the original columns in Dataverse.

 

Is there a certain way I have to map them in the mapping table, like the use of a certain code or function? 

 

Any help would be great.

@AaronK11 did you try the Mapping tab and click on Import schema?

Okay, I've found the +New mapping field, and when I click on it, it looks like I can add the columns I'm after. I presume the source column is the column I'm after, and the destination is the column I want to expand inside of it. What happens if I want to select multiple columns inside a lookup column? How can I go about this? Or do I have to add the same column again to the source and re-expand it? 

 

Thanks for the help

@AaronK11 Ok bear with me as I am trying to decode what you are asking.

 

The source columns are the columns from your Dataverse table, that is, your data source.  I have never used Dataverse, but I imagine Dataverse is just another name for some DB engine, and it exposes tables like any DB does.  Therefore 1 column should just be 1 column, with no column inside it.

 

So when you write "the column i want to expand inside of it," I am not sure what you mean.  What do you mean by "expand inside of"?

 

The general principle is that you map the source columns to the destination columns.  So the source columns should have exactly the same names and types as in your data source; for the destination columns you can change the names if you want, but not the data types.

 

And that's about it.  Fairly straightforward, and it should work.  Although there is, or was (not sure if the dev team fixed it), a bug with source string types being converted to the NVARCHAR(MAX) type, which would then trigger an error message that the underlying version of SQL Server is not compatible with this type.  

 

If this solves your issue, please click "Accept as solution" button. 

Hi, apologies, it's hard to understand exactly what each of us is saying over messages. When I say the column that I want to expand, I'm referring to columns that appear like the following in Power BI.

AaronK11_0-1744704580495.png

AaronK11_1-1744704646725.png

 

Now as you can see, I have the option to expand this column, and often I want to select more than one of the columns inside the original column.

The mapping option inside the Copy data function in the pipeline you mentioned has now allowed me to find this column, as it didn't originally pull through. But I'm asking: is there any way I can use this same expand-columns functionality that I get in Power BI when transforming the data I have in my lakehouse/pipeline? Thanks again


v-tsaipranay
Community Support

Hi @AaronK11 ,

Thanks for reaching out to the Microsoft Fabric Community.

 

What you're experiencing is expected behavior when working with Dataverse complex fields such as lookup columns, choices, and navigation properties via pipelines or Dataflow Gen2. These fields are stored as nested objects or references, and unlike Power BI Desktop, pipelines do not automatically expand or flatten them.

Try these troubleshooting steps to resolve this:

  1. Identify complex columns (e.g., lookups, choices, or customer fields) in the Dataverse tables you're ingesting.
  2. In Dataflow Gen2, apply the Flatten transformation to extract values such as id, name, or value from those nested fields.
  3. Ensure your copy activity in the pipeline is correctly mapping the schema and including complex fields—not dropping them during ingestion.
  4. As an alternative, if you require a more schema-aware ingestion, consider using Dataverse Export to Data Lake, which retains metadata and relationships more reliably.

These steps should help you transform and enrich your data as expected in the downstream layers of your lakehouse.
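For reference, here is what the Flatten/expand of a nested lookup field (step 2 above) looks like in M, the language behind Dataflow Gen2. This is a minimal, self-contained sketch: the table, the ownerid lookup, and its Id/Name fields are invented for illustration, not taken from the actual Dataverse environment:

```m
let
    // Tiny inline table standing in for a Dataverse table after ingestion:
    // the "ownerid" lookup arrives as a nested record, not a flat value.
    Accounts = #table(
        {"accountname", "ownerid"},
        {
            {"Contoso",  [Id = "A1", Name = "Alice"]},
            {"Fabrikam", [Id = "B2", Name = "Bob"]}
        }
    ),
    // Expand (flatten) the nested record into ordinary columns.
    Expanded = Table.ExpandRecordColumn(
        Accounts,
        "ownerid",
        {"Id", "Name"},
        {"ownerid.Id", "ownerid.Name"}
    )
in
    Expanded
```

Choice-type columns work the same way once they surface as records; for a one-to-many relationship the column arrives as a nested table instead, and Table.ExpandTableColumn takes the same arguments.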

 

I hope this resolves your issue; if you need any further assistance, feel free to reach out.

If this post helps, then please give us Kudos and consider accepting it as a solution to help the other members find it more quickly.

Thank you.

Hi there, I noticed you replied to my issue earlier regarding "Pipeline and Dataverse not copying my columns" and just wanted to query a couple of bits of it with you.

 

  1. Identify complex columns (e.g., lookups, choices, or customer fields) in the Dataverse tables you're ingesting. When you say identify: when I'm bringing my columns into my report through the pipeline, the columns that are normally expanded here in Power BI don't appear, so I'm not sure how I can ingest them.
  2. AaronK11_0-1744100671579.png

    I have applied the flatten as mentioned.

  3. I think my query does the copy as it should do.

    AaronK11_1-1744100762563.png

     

    Any further help on this matter would be greatly appreciated.

Hi @AaronK11 ,

Thank you for the follow-up and the additional screenshots.

 

This issue occurs because Dataverse complex fields (like lookups or choices) are stored as nested objects. Unlike Power BI, Fabric pipelines and Dataflow Gen2 do not auto-expand these fields unless explicitly configured.

  • In your pipeline, open the Copy data activity, go to the Mapping tab, and ensure that complex fields like ownerid, customerid, and other lookups are included in the schema. If they’re missing, you may need to manually map them.
  • After loading data into your lakehouse, open your Dataflow Gen2, use the Flatten transformation on the complex fields, and extract the nested values (e.g., .name, .value, .id) as needed.
  • If the fields are correctly mapped and you’ve already applied Flatten, but the data is still blank or incomplete, this could indicate an unexpected ingestion issue.

In that case, we recommend creating a support ticket through the official Fabric support portal for further investigation:

How to create a Fabric and Power BI Support ticket - Power BI | Microsoft Learn

 

If this post helps, then please give us Kudos and consider accepting it as a solution to help the other members find it more quickly.

Thank you.

Element115
Super User

Bug.  Open a support ticket here:

 

Microsoft Fabric Support and Status | Microsoft Fabric
