Good day all,
First, know that I'm new to the Power BI space and am currently troubleshooting "not enough memory" errors that occur during a semantic model refresh.
I've been looking at this for over a week and wanted a sanity check from the community.
Issue: During scheduled refreshes of one of our production workspace models, we consistently receive "not enough memory" errors.
Additional data:
- The model refreshes successfully in Power BI Desktop and in the Stage workspace; only the scheduled refresh in the PROD workspace fails.
- The PROD data source is actually smaller than Stage, and the published model sizes differ by only about 90 MB.
- The PROD workspace runs on an A2 (embedded) capacity and the model uses incremental refresh.
So, why would the model refresh everywhere except on the PROD Power BI Service?
Thanks in advance!
Paul
Hi @paulgyroh,
Given the context you've shared, it sounds like you're dealing with a classic case of capacity constraints on the Power BI Service, despite having a relatively efficient model. The key indicator is that the semantic model refreshes perfectly fine in Power BI Desktop and in the Stage workspace, but consistently fails with "not enough memory" errors in the PROD workspace, even though it has a smaller data source overall. Since you're using an A2 SKU, which has limited memory and CPU resources compared to Premium P SKUs, the likely culprit is capacity pressure during the scheduled refresh windows in PROD. On an A (embedded) SKU, memory and resources are shared across all workspaces and workloads assigned to that capacity, so your refreshes are susceptible to resource contention, especially during peak times.
The Stage environment may be experiencing less concurrent activity or refresh demand, which is why it works there. Also, while the model size difference between Stage and PROD is only about 90 MB, that can have a big impact when you're near the memory ceiling, especially since background work such as loading partitions, recalculating measures, or finalizing the refresh needs overhead memory that can push usage beyond the capacity limit. To mitigate this, try rescheduling your PROD refreshes to less busy times, outside typical peak windows, and check whether any other models or dataflows are refreshing at the same time and causing collisions. If the model is still under 1 GB and using incremental refresh, you could also try optimizing the partition size further or reducing calculated columns/measures. Ultimately, if these mitigations don't stabilize things, consider moving to a Premium P SKU or increasing the capacity tier (e.g., A3) to give your workspace more breathing room.
Thanks @Poojara_D12, we found the resolution, and it was along the lines of what you were indicating here. By default, Power BI loads data from six tables in parallel during a refresh. Once we reduced this to two, all of our refresh issues went away! To access this setting:
1) Open the semantic model in Power BI Desktop
2) Navigate to File -> Options and settings -> Options -> Data Load, then scroll down to the "Current File" section
3) Select the setting you'd like to try. Screenshot attached.
Thanks everyone for your assistance in this matter.
Please note that this setting only applies to Power BI Desktop; it is ignored in the Service. If you want to serialize queries, use Function.InvokeAfter()
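For anyone finding this thread later, here is a minimal M sketch of that approach, with placeholder server, database, and delay values; the idea is to stagger the delays across your heavier queries so their evaluations don't all start at once:

```
// Minimal sketch: delay evaluation of a source query with Function.InvokeAfter.
// "your-sql-server" and "YourDatabase" are placeholders; vary the delay per
// query so expensive sources are not evaluated simultaneously.
let
    DelayedSource = Function.InvokeAfter(
        () => Sql.Database("your-sql-server", "YourDatabase"),
        #duration(0, 0, 0, 30)   // wait 30 seconds before invoking the source
    )
in
    DelayedSource
```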
I thought that too on first reading of the documentation, but after re-reading it, confirming that this setting changes the model itself, and seeing it resolve our issue, I'm confident the "Current File" setting is honored by the Service. The "Global" setting indeed is not. Before we changed this setting, 100% of the daily refreshes failed; since changing it, 100% have succeeded.
Ah, so maybe they fixed that since the last time I attempted de-parallelization. I'll check it again.
Hi @paulgyroh,
Just following up to see if the solution provided was helpful in resolving your issue. Please feel free to let us know if you need any further assistance.
If the response addressed your query, kindly mark it as Accepted Solution and click Yes if you found it helpful — this will benefit others in the community as well.
Best regards,
Prasanna Kumar
Hi @paulgyroh,
Check if query folding works in PROD to confirm whether transformations are pushed to the source or processed in memory.
Monitor gateway performance to identify potential refresh failures due to instability, timeouts, or memory/resource limits.
Run the SQL query directly in PROD to detect any slowness or inefficiencies that could affect refresh time.
Use Table.RowCount after date filtering to ensure valid data is returned and partitions are not empty (see the sketch after this list).
Review partitions via XMLA or refresh logs to spot any missing, empty, or malformed partitions after publish.
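As a minimal illustration of the row-count check, a temporary query along these lines works; #"Sales (filtered)" is a placeholder for whichever existing query in the model applies the date filter:

```
// Temporary diagnostic query: confirm the date-filtered query still returns
// rows. #"Sales (filtered)" is a placeholder for an existing query that
// applies the RangeStart/RangeEnd filter; a count of zero after publishing
// points at a parameter or partition problem rather than a memory problem.
let
    RowsReturned = Table.RowCount(#"Sales (filtered)")
in
    RowsReturned
```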
If the issue still persists, I suggest reaching out to Microsoft Support by raising a ticket. Their team can analyze backend logs and may be able to provide a more accurate resolution.
Below is the link to create a support ticket: https://learn.microsoft.com/en-us/power-bi/support/create-support-ticket
If you find this response helpful, please consider marking it as the accepted solution and giving it a thumbs-up to support others in the community.
Thank you & Regards,
Prasanna Kumar
Hi @paulgyroh,
Thank you for reaching out to the Microsoft Fabric Forum Community.
Also, thanks to @lbendlin for the prompt and helpful response.
When you make a metadata change (even just in Power Query) and publish the model to the Service:
1. Power BI drops all existing partitions from Incremental Refresh. Because the structure is considered changed, the engine cannot trust that the partitions are still valid.
2. On the first refresh after publishing, Power BI:
- Rebuilds the entire historical dataset from scratch, which means reloading all historical partitions
- Applies transformations for each partition (which may or may not fold back to SQL)
- Keeps new data and old data in memory simultaneously during processing
3. This causes:
- Massive memory spikes
- Long refresh durations
- A higher likelihood of “not enough memory” or “resource governing” errors
This is especially problematic on smaller capacity SKUs.
Publishing takes a long time because Power BI is rebuilding internal metadata, recalculating dependencies, and possibly validating the model against the data source.
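To make the per-partition cost concrete, here is a rough M sketch of the usual incremental refresh filter; the server, table, and column names are placeholders. Keeping the RangeStart/RangeEnd filter early and foldable means each rebuilt partition is filtered at the source rather than in memory:

```
// Sketch of an incremental refresh query whose date filter can fold to SQL.
// "your-sql-server", "YourDatabase", "FactSales", and "OrderDate" are
// placeholders; RangeStart and RangeEnd are the standard incremental
// refresh parameters that Power BI substitutes per partition.
let
    Source = Sql.Database("your-sql-server", "YourDatabase"),
    Fact   = Source{[Schema = "dbo", Item = "FactSales"]}[Data],
    // Apply the filter directly on the source column and before any step
    // that breaks query folding (e.g. Table.Buffer), so the WHERE clause is
    // pushed to SQL for every partition that gets rebuilt.
    Filtered = Table.SelectRows(
        Fact,
        each [OrderDate] >= RangeStart and [OrderDate] < RangeEnd
    )
in
    Filtered
```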
If you find this response helpful, please consider marking it as the accepted solution and giving it a thumbs-up to support others in the community.
Thank you!
Thanks for this @prasannag. I'm still digesting all the implications of what you said, but I'm not yet clear on the answer to my specific question.
Q: So, why would the model refresh everywhere but on the PROD Power BI service?
Publishing is successful everywhere.
Paul
If you make metadata changes and republish the report to the workspace, then the Incremental Refresh partitions will be destroyed and recreated. That can cause a substantially larger load than during a normal incremental refresh.
Thanks for chiming in here @lbendlin. The only thing I change between Stage and PROD publishing is in Power BI Desktop: I verify that the semantic model is bound to the correct PROD database parameter under Home → Transform Data → Edit Parameters. Would that be considered metadata to you?
Paul
yes, yes it would.
Gotcha, that explains why publishing such a large model takes so long, but how does that impact the refresh?
Paul
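For anyone landing here later, the parameter binding discussed above typically looks something like this in M; the server name and the "DatabaseName" parameter are illustrative. Because a parameter is stored as its own query in the model, switching its current value changes the model definition, which lines up with the point above that it counts as a metadata change:

```
// Illustrative parameter-driven source. "DatabaseName" is a Power Query
// parameter whose current value is switched between the Stage and PROD
// databases via Home -> Transform Data -> Edit Parameters; the server
// name is a placeholder.
let
    Source = Sql.Database("your-sql-server", DatabaseName)
in
    Source
```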