@AaronK11
I am going to assume that when you connect to your dataverse table from the Power Query editor, you have something similar to my screenshots:
Let's call this screenshot "S0". S0 is what the GUI shows when you click the step named "Source" in the Applied Steps list on the right of the Power Query (PQ) editor window. Each such step is a line of code in a functional programming language called M. A collection of such steps, i.e., an M script, is what PQ refers to as a query. To see the whole query, right-click a query in the left pane titled "Queries" and select the Advanced Editor option in the menu that pops up.
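For reference, here is a minimal sketch of what a whole query looks like in the Advanced Editor, using placeholder server, database, and table names:

let
    Source = Sql.Database("DB_SERVER_NAME", "DB_NAME"),
    dbo_FileImport = Source{[Schema = "dbo", Item = "TABLE_NAME"]}[Data]
in
    dbo_FileImport

Every step between let and in is one line of M, and the expression after in is what the query returns.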
Now let's go back to the screenshots. What I am showing you here is a table from a data source that is a database (DB); in this case, SQL Server. This is a relational DB, meaning the tables in it are linked by relationships.
In the case of the table in S0, it links to 2 other tables in the DB using what's called a foreign key. So when you connect Power Query to this table and click the Source step, it shows you those relationships as columns of type Table, as you can see in S0.
If you click the icon with the 2 outward-facing arrows to the right of the column name, the UI shows you the column names that will be imported if you click OK; in this case, the column names from the table DATA_FAST.
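For illustration, clicking OK in that dialog generates an expand step like the sketch below; the column names ID and VALUE are hypothetical stand-ins for whatever columns DATA_FAST actually has:

#"Expanded DATA_FAST" = Table.ExpandTableColumn(
    dbo_FileImport,           // the previous step
    "DATA_FAST",              // the table-type column to expand
    {"ID", "VALUE"},          // columns picked in the dialog (hypothetical names)
    {"DATA_FAST.ID", "DATA_FAST.VALUE"}  // new column names after the expand
)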

Here are the first steps of the M query:

let
    Source = Sql.Database("DB_SERVER_NAME", "DB_NAME"),
    dbo_FileImport = Source{[Schema = "dbo", Item = "TABLE_NAME"]}[Data],
    // DATA_SLOW and DATA_FAST are the tables that TABLE_NAME points to with a foreign key
    #"Removed Columns" = Table.RemoveColumns(dbo_FileImport, {"DATA_SLOW", "DATA_FAST"}),
With that out of the way, you have 2 choices:
1. Bring all your tables in using Power Query, which in Fabric is called Dataflow Gen2 (DFg2). If you create a Dataflow Gen2 artifact, you will be presented with the same GUI you see after clicking the "Transform data" button in Power BI Desktop. Remove the columns that just represent links/relationships to other tables, keep only the data columns, and produce one new, single table (see the first sketch after this list). Repeat this process for every table you want to ingest into your lakehouse, or merge some or all of the tables, depending on what makes sense for your use case;
OR
2. If you do not need to perform any transformations (the "T" in the ETL/ELT acronym), or if you would rather perform them in T-SQL than in the Power Query M language, then simply use a "Copy data" activity with the appropriate SQL script in the Source tab of the activity. A script might be needed if you do not want to ingest the columns used for the DB relationships that hold foreign keys (see the second sketch after this list).
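For option 1, here is a minimal sketch of a DFg2 query. Instead of removing the link columns as in my excerpt above, you can equivalently keep only the data columns with Table.SelectColumns; the column names ID, VALUE, and CREATED_ON are hypothetical:

let
    Source = Sql.Database("DB_SERVER_NAME", "DB_NAME"),
    dbo_FileImport = Source{[Schema = "dbo", Item = "TABLE_NAME"]}[Data],
    // keep only the data columns; the relationship columns are never loaded
    #"Kept Columns" = Table.SelectColumns(dbo_FileImport, {"ID", "VALUE", "CREATED_ON"})
in
    #"Kept Columns"

You then map the output of this query to a new table in your lakehouse via the query's destination settings.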
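For option 2, the SQL script in the Source tab can be as simple as a SELECT that lists only the data columns, so the foreign-key columns are never ingested. A sketch, again with hypothetical column names:

SELECT ID, VALUE, CREATED_ON  -- list only the data columns you need
FROM dbo.TABLE_NAME;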
FYI, doing everything with a pipeline is cheaper in terms of CUs (capacity units) consumed than doing everything in a DFg2.
I am going to assume that you want to ingest the data from your tables to perform some sort of analytic workload. If so, you do not need to maintain the relationships between your tables like in a regular relational database. This process will denormalize your DB (assuming the source DB/dataverse was a normalized OLTP DB) and thus make the data ready to be processed by whatever analytic engine you fancy using downstream (Power BI (SSAS on the inside), Spark, Databricks, etc.).
If some of these terms make no sense, please ask ChatGPT or Copilot to explain them to you.
And if all this helps, please give a thumbs-up and Accept as solution.