Save on Azure SQL Databases

Hi all! In this article, I will talk about how our company saved money by adopting Azure SQL Elastic Pool. Along the way, there will be examples of using the Azure REST API to enumerate SQL servers, enumerate SQL databases, and retrieve metrics.

The most inquisitive readers will find the amount saved at the end of the article. Those who are too lazy to read can go straight to the sources.

Azure SQL Elastic Pool

I will just link to the excellent Microsoft documentation here. In short, how do we save money? By combining several databases into one pool, so that the pool's resources are shared among all of them.

If you look at the rates, you will see that pool resources (eDTU) are more expensive than resources for a single database (DTU): for example, 100 DTUs cost $147 per month, while 100 eDTUs cost $220. A simple conclusion follows: if you have a database with a more or less uniform load, there is no point in adding it to a pool. Conversely, if you have a bunch of databases with uneven load whose peaks are spread out in time, they are ideal candidates for pooling.

The question arises: how do we find out the total load across all databases on one server? In the portal you can see the DTU graph for a single database at any given moment, but unfortunately there is no way to see the summary graph we need so badly. The Azure REST API comes to the rescue.

Enumeration and getting metrics

To work with Azure, we need a few pieces. First, let's write the code for obtaining a token. Then we get all the servers in our subscription. After that, for the servers we are interested in, we get the list of databases. Finally, we write the code for retrieving metrics.
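The steps above can be sketched roughly as follows. This is a minimal illustration, not the article's original code: it assumes an Azure AD service principal (tenant ID, client ID, client secret are placeholders you must supply), uses the OAuth2 client-credentials flow, and picks an `api-version` that should be treated as an assumption to check against current docs.

```python
import json
import urllib.parse
import urllib.request

MANAGEMENT = "https://management.azure.com"
API_VERSION = "2021-11-01"  # assumed version of the SQL management API

def get_token(tenant_id: str, client_id: str, client_secret: str) -> str:
    """Obtain a bearer token via the OAuth2 client-credentials flow."""
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "resource": MANAGEMENT + "/",
    }).encode()
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/token"
    with urllib.request.urlopen(urllib.request.Request(url, data=body)) as resp:
        return json.load(resp)["access_token"]

def list_servers_url(subscription_id: str) -> str:
    """URL that enumerates all SQL servers in a subscription."""
    return (f"{MANAGEMENT}/subscriptions/{subscription_id}"
            f"/providers/Microsoft.Sql/servers?api-version={API_VERSION}")

def list_databases_url(subscription_id: str, resource_group: str, server: str) -> str:
    """URL that enumerates all databases on one server."""
    return (f"{MANAGEMENT}/subscriptions/{subscription_id}"
            f"/resourceGroups/{resource_group}"
            f"/providers/Microsoft.Sql/servers/{server}"
            f"/databases?api-version={API_VERSION}")

def get_json(url: str, token: str) -> dict:
    """GET a management-API URL with the bearer token and parse the JSON."""
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

In practice you would call `get_token` once, then walk `list_servers_url` and `list_databases_url` with `get_json` to build the full inventory.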

Let's add logic that saves the metrics for the last few days to disk in a form suitable for subsequent visualization.
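One possible way to persist the collected series, a sketch only (the flat file layout and helper names are my own choice, not the article's):

```python
import json
from datetime import datetime, timedelta, timezone
from pathlib import Path

def save_metrics(path: str, series: dict) -> None:
    """Write per-database metric series to disk as JSON.

    `series` maps database name -> list of (ISO timestamp, dtu_used) points.
    A flat list of rows is easy to feed into Power BI later.
    """
    rows = [
        {"database": db, "timestamp": ts, "dtu_used": value}
        for db, points in series.items()
        for ts, value in points
    ]
    Path(path).write_text(json.dumps(rows, indent=2))

def default_timespan(days: int = 3) -> str:
    """Build a `timespan` query value covering the last few days."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(days=days)
    return f"{start.isoformat()}/{end.isoformat()}"
```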

Attention: in our case all databases use the DTU purchase model, which is why the dtu_used metric is read here. If your databases use the vCore purchase model, you will need other metrics (for example, cpu_percent) to estimate the load. You can see the list of available metrics for different Azure resources here. The approach is more or less universal; it should also work for metrics of other resources.
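A sketch of building the metrics query itself, using the Azure Monitor metrics REST endpoint. The endpoint shape and `api-version` follow the public metrics API, but treat them as assumptions to verify; swapping the metric name is all it takes to adapt this to the vCore model:

```python
def metrics_url(database_resource_id: str,
                metric: str = "dtu_used",
                interval: str = "PT1M",
                timespan: str = "") -> str:
    """Build a Microsoft.Insights metrics query URL for one database.

    For DTU-model databases we read `dtu_used`; for vCore databases
    pass e.g. `cpu_percent` instead. PT1M is the one-minute interval
    used for the most accurate hourly averages.
    """
    url = (f"https://management.azure.com{database_resource_id}"
           f"/providers/microsoft.insights/metrics"
           f"?api-version=2018-01-01"
           f"&metricnames={metric}&interval={interval}")
    if timespan:
        url += f"&timespan={timespan}"
    return url
```

Here `database_resource_id` is the full ARM ID of the database (`/subscriptions/.../servers/.../databases/...`), as returned by the enumeration step.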

Data visualization in Power BI

It is implemented simply. Open this sample in Power BI and feed it the JSON obtained in the previous step, and you get graphs like the ones below (the picture shows a server with ~90 databases). The top chart shows the hourly average of the per-minute averages, the bottom one the hourly average of the per-minute maximums. For the most accurate results when retrieving metrics, I used the minimum interval of one minute (so each hourly average is computed from 60 values).

From the graphs we can see that this type of load is a great fit for a pool: a lot of lightly loaded databases with infrequent peaks (see the similar picture in the documentation).

Let's estimate the savings: we have about 90 databases of 10 DTU each (a total of $1,260 per month). We can redirect all of this load to a pool of 50 eDTU ($110 per month).
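The arithmetic, using the prices quoted above:

```python
# Savings estimate with the figures from the article.
databases = 90
single_db_monthly = 1260 / databases   # ~$14 per 10 DTU database
pool_monthly = 110                     # 50 eDTU pool

total_single = databases * single_db_monthly
savings = total_single - pool_monthly
print(f"single: ${total_single:.0f}, pool: ${pool_monthly}, saved: ${savings:.0f}")
# prints: single: $1260, pool: $110, saved: $1150
```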

In total, that is about $1,150 in savings every month on a single server. Not bad. But before moving to a pool you need to analyze the limits (since the databases are lightly loaded, with 99.9% probability they will be more than enough), and do not forget the most important part:


If you cannot completely stop the load on the databases while reconfiguring resources, then I strongly advise you to verify in the DEV/QA environment that operations such as:

  • Adding a database to the pool

  • Removing a database from a pool

  • Scaling the pool up

  • Scaling the pool down

do not lead to negative consequences for your product.

Migration process

If the testing was successful, you can start the migration. I recommend a gradual approach: first, on the DEV environment, move a few of the most lightly loaded databases into the pool, then watch the pool load and the system's behavior for several days. Then add the remaining databases and watch again. If everything is fine on DEV, perform the migration on QA in the same way, and only then on production.

It also makes sense to set the per-database DTU limit to no more than half the pool size. It does not hurt to create alerts on the pool that trigger when the load approaches the pool's maximum.
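A sketch of what the corresponding elastic-pool update payload could look like via the management REST API. The `perDatabaseSettings` property shape follows the elastic-pool management API, but the exact field names and the helper itself are assumptions to check against current docs, and the allowed capacity values depend on the pool tier:

```python
def pool_update_payload(pool_edtu: int) -> dict:
    """Build a PATCH payload capping any single database at half the
    pool's eDTUs (the rule of thumb from the text: per-database max
    should be no more than pool size / 2)."""
    return {
        "properties": {
            "perDatabaseSettings": {
                "minCapacity": 0,
                "maxCapacity": pool_edtu // 2,
            }
        }
    }
```

For a 50 eDTU pool this produces a per-database cap of 25; you would PATCH it to the pool's ARM resource URL with the same bearer token as the earlier calls.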


We have several servers across different environments (DEV\QA\STG\PRD\PRDWEU), each with a large number of databases. Thanks to a simple transition from single databases to pools, we began saving ~$8,000 per month. At the same time, performance is fine; as an illustration, the picture below shows a pool with 86 databases. Monitoring the load on the pool is a pleasure, since the chart already shows the total for all databases at once.

If you have a similar infrastructure, with a lot of lightly loaded databases and rare peaks, I strongly advise you to look into Azure SQL Elastic Pool.

Thanks to all!
