Understanding the cost of Microsoft Fabric is crucial for using it efficiently. Microsoft recently published a new tool to help estimate how much Fabric capacity you really need. In this post, we’ll explore how to use the calculator effectively, which inputs matter most, and why estimating your Fabric usage is a critical part of managing budgets and avoiding surprises.
Capacity
First, let’s talk a little bit about capacity in Fabric, as it is one of the most critical parts of the system. You can think of capacity as the fuel of your data platform. If you run out of it, everything stops. If you have too much of it, you are paying for horsepower you never use. That’s why it is critical to match your capacity to your consumption.
As a rule of thumb, you should always have a little more capacity than you use. It would be neither the first nor the last time I have seen Fabric run out of capacity because someone decided to run migrations in production. Fabric supports smoothing to handle short bursts of overconsumption, but if you overspend enough, you won’t be able to run your Notebooks, Pipelines, or Power BI reports.
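To make the smoothing behavior concrete, here is a minimal Python sketch of the idea (a simplified model of my own, not Fabric’s actual billing logic; Fabric smooths background operations over a 24-hour window):

```python
# Simplified illustration of smoothing, NOT Fabric's real billing algorithm:
# background consumption is spread over a 24-hour window, so a short spike
# is harmless as long as the smoothed average stays under the purchased CUs.
def smoothed_usage(cu_per_hour: list[float], window_hours: int = 24) -> float:
    """Average CU usage over the smoothing window."""
    window = cu_per_hour[-window_hours:]
    return sum(window) / len(window)

# An F8 that is idle (1 CU/h) except for a 3-hour migration spike of 40 CU/h:
usage = [1.0] * 21 + [40.0] * 3
print(smoothed_usage(usage))  # -> 5.875 CU, still under 8, so no throttling
```

A sustained overspend, on the other hand, pushes the smoothed average past the SKU’s CU count, and that is when throttling kicks in.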
Basic Rules to Estimate Capacity
Fabric capacity SKUs come in powers of two: F2, F4, F8, F16, F32, and so on. F2 is basically enough for a handful of database tables, a few Dataflows, and a couple of Notebooks. F4 is enough for ~20-30 normal-size tables, and you can easily run 20 Notebooks with it. F8 is more than enough for an SMB with, say, ~3 systems (ERP, CRM, HR) integrated into Fabric.
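To make the sizing rule concrete, here is a minimal sketch that picks the smallest SKU covering an estimated peak load (the `pick_sku` helper and the 20% headroom factor are my own illustration, not part of any Fabric SDK):

```python
# Minimal sketch: pick the smallest Fabric F-SKU that covers an estimated
# peak load plus a safety margin. The SKU sizes are the real F2..F2048 steps;
# the headroom factor is an illustrative choice, per the rule of thumb above.
FABRIC_SKUS = [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048]

def pick_sku(peak_cu: float, headroom: float = 1.2) -> str:
    target = peak_cu * headroom  # always keep a little extra capacity
    for cu in FABRIC_SKUS:
        if cu >= target:
            return f"F{cu}"
    raise ValueError("Estimated load exceeds the largest available SKU")

print(pick_sku(6))   # -> F8  (6 * 1.2 = 7.2)
print(pick_sku(14))  # -> F32 (14 * 1.2 = 16.8, so F16 is too small)
```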
One thing to note is that SKUs smaller than F64 require a Power BI Pro or Premium Per User license for each user consuming Power BI content. Content in workspaces on F64 or larger capacities is available to users with a Free license, as long as they have the Viewer role on the workspace. For more information about capacity-related licensing, check out this Microsoft Learn page.
Also remember that you can purchase multiple capacity “instances” and assign them to different workspaces. For example, you can have an F4 (4 CUs) for development, another F4 for the test environment, and an F8 for production. You can also split capacities across business units or cost centers if you want to, and one capacity can be assigned to multiple workspaces.
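As an illustration of that split (all workspace and capacity names below are made up), the assignment might look like this:

```python
# Illustrative only: a map of workspaces to capacity instances, mirroring the
# dev/test/prod split described above. All names are hypothetical.
capacity_assignment = {
    "ws-sales-dev":  "cap-dev-f4",   # F4 for development
    "ws-sales-test": "cap-test-f4",  # F4 for the test environment
    "ws-sales-prod": "cap-prod-f8",  # F8 for production
    "ws-hr-prod":    "cap-prod-f8",  # one capacity can back several workspaces
}
```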
Tweaking and squeezing everything out of your capacity units is worth its own blog post, but in general Notebooks are the cheapest way to run workloads and Dataflow Gen2 is the most expensive.
The Calculator
The Microsoft Fabric SKU Estimator suggests an appropriate Fabric SKU and the related storage needs based on your inputs across the different Fabric workloads. So it is not only about CUs, and I think that is its most useful feature. The calculator has a tendency to exaggerate the capacity a little, but I don’t think that is bad: you need some reserve anyway, and the calculator does not know exactly what your data looks like.
For example, let’s say we have 350 GB of data that we are going to bring into Fabric. We run 4 daily batch cycles and ingest 10 tables. As workloads we could have 2 data scientists running 10 ML model trainings a day, using some Notebooks, Data Pipelines, and Dataflow Gen2 (Data Factory).
After running the estimation, the calculator suggests an F16 capacity. That is actually not bad. Maybe we could get away with an F8, but as a safety measure the F16 is the safer bet.
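Working backwards from that recommendation with the `pick_sku` sketch from earlier (the peak figure below is my assumption; the calculator does not expose its internal estimate):

```python
# If the estimator internally landed on a peak of roughly 12 CU for the
# workload above, the earlier helper reproduces the F16 suggestion:
print(pick_sku(12))  # -> F16 (12 * 1.2 = 14.4, so F8 is too small)
```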

As seen in this example, the OneLake operations (database) do not consume many capacity units, and Spark jobs are basically free. Even in this example, Data Factory is chewing up 13% of the capacity with just two hours of daily operations. If our pipelines ran any longer, let’s say 6 hours a day, our Dataflows and Data Pipelines could be chewing up around 35% of our capacity. With Notebooks, the estimate would be somewhere around 5-10% (because you can usually achieve more in less time with Notebooks…).
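A quick back-of-the-envelope check of that scaling, assuming consumption grows roughly linearly with runtime (a simplification; real pipelines rarely scale perfectly linearly):

```python
# Back-of-the-envelope: scale the Data Factory share linearly with runtime.
# An F16 provides 16 CUs around the clock, i.e. 16 * 24 CU-hours per day.
daily_cu_hours = 16 * 24                # 384 CU-hours available on an F16
df_cu_hours_2h = 0.13 * daily_cu_hours  # ~50 CU-hours at 2 h/day (13%)

share_6h = (df_cu_hours_2h / 2 * 6) / daily_cu_hours
print(f"{share_6h:.0%}")  # -> 39%, in the same ballpark as the ~35% estimate
```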
If you add an Eventstream, the calculator always starts from F64. In my experience you can run an Eventstream on a lower capacity, but it can easily take up to an F32. I tried using one to ingest ~30K events a day, and it ended up consuming ~6-8 CUs, which is most of an F8. So I recommend taking at least an F8 if you have any plans to use Eventstream.

If you want to run a 100 GB SQL database in Fabric 24/7 with 2 vCores, the recommendation is F8. In West Europe, the F8 costs 1,129.10€/month. If we just spun up a 2 vCore Business Critical Azure SQL database with a standby replica, it would cost 488.92€/month. Of course, the F8 is just an estimate, and the real workload could fit around an F4, which would cut the price in half…
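The comparison in numbers, using the prices quoted above (a rough sketch, not an official pricing calculation; Fabric F-SKU prices scale linearly with CUs, so an F4 costs half of an F8):

```python
# Rough monthly cost comparison with the West Europe prices quoted above.
fabric_f8 = 1129.10        # F8 pay-as-you-go, EUR/month
fabric_f4 = fabric_f8 / 2  # F-SKU pricing is linear in CUs
azure_sql = 488.92         # 2 vCore Business Critical + standby replica, EUR/month

print(f"F8 vs Azure SQL: {fabric_f8 / azure_sql:.1f}x")  # -> 2.3x
print(f"F4 vs Azure SQL: {fabric_f4 / azure_sql:.1f}x")  # -> 1.2x
```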
The last interesting part of the calculator is that the OneLake storage amount is calculated with the formula: total size of the data × 1.7. For 100 GB of data you need 170 GB of OneLake storage; for 1,000 GB you need 1,700 GB. I tried to reverse engineer why it is exactly 1.7, but the calculation happens in the backend through an API, and you cannot see the calculation rules by looking at the UI JavaScript code (bummer).
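As a formula it is as simple as it sounds (the 1.7 multiplier is just what the estimator returns; why it is exactly 1.7 remains undocumented):

```python
# OneLake storage rule used by the estimator: raw data size times 1.7.
def onelake_storage_gb(data_gb: float) -> float:
    return data_gb * 1.7

print(round(onelake_storage_gb(100)))   # -> 170
print(round(onelake_storage_gb(1000)))  # -> 1700
```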
Known Limitations
It’s worth noting that the calculator does not take into account workload patterns like seasonal spikes or one-off heavy operations. Also, it doesn’t simulate concurrent user access or sudden growth in data volumes. For example, if your workloads have peak traffic in the mornings and downtime at night, those dynamics aren’t factored into the output.
Summary
The Microsoft Fabric SKU Estimator is a practical tool for approximating required capacity based on workload type, volume, and frequency. It helps assess how many compute units (CUs) are consumed across services like Notebooks, Data Pipelines, Dataflows, and Eventstreams. Smaller workloads may fit within F4 or F8 SKUs, while heavier, concurrent operations or streaming scenarios often require F16 or higher.