With this short post I would like to dispel one misconception related to AWR analysis of databases running on Oracle Exadata. For almost ten years I have constantly faced the question: what is the contribution of the Exadata Software to performance? Or, to use a coined word: to what extent is the workload of a given database "exadated"?
Often this legitimate question is, in my opinion, given the wrong answer by pointing at AWR statistics, which follow the system waits methodology: response time is interpreted as the sum of CPU time (DB CPU) and the wait times of various wait classes.
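To make that decomposition concrete, here is a minimal sketch of how the "% DB time" column of an AWR report is derived. The numbers are entirely made up and do not come from any real report; only the formula (each component's share of total DB time) reflects the methodology described above:

```python
# Sketch of the DB time decomposition used by AWR (hypothetical numbers).
# DB time is taken as DB CPU plus the foreground wait times, and each
# component's "% DB time" is its share of that total.

db_cpu = 7200.0  # seconds of CPU time (made-up value)
waits = {        # foreground wait time per event, in seconds (made-up values)
    "cell single block physical read": 300.0,
    "SQL*Net message from dblink": 450.0,
    "direct path read": 250.0,
}

db_time = db_cpu + sum(waits.values())

shares = {name: 100.0 * t / db_time for name, t in waits.items()}
shares["DB CPU"] = 100.0 * db_cpu / db_time

for name, pct in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{name:35s} {pct:5.1f} % of DB time")
```

In a real report DB time also contains small unaccounted components, so the percentages rarely sum to exactly 100; the sketch ignores that for clarity.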
With the advent of Exadata, AWR statistics gained specific wait events related to the work of the Exadata Software. As a rule, the names of such events begin with the word "cell" (a cell is an Exadata Storage Server); the most common of them are "cell smart table scan", "cell multiblock physical read" and "cell single block physical read".
In most cases the share of such Exadata waits in the total response time is small, so they do not even make it into the "Top 10 Foreground Events by Total Wait Time" section (in that case they should be looked for in the "Foreground Wait Events" section). It was only with great difficulty that we found an example of a daily AWR report from one of our customers in which Exadata waits made it into the Top 10 section, together accounting for about 5%:
Event                            Total Wait Time (sec)  % DB time  Wait Class
-------------------------------  ---------------------  ---------  ----------
SQL*Net more data from dblink
cell single block physical read                                    User I/O
Sync ASM rebalance
cell multiblock physical read                                      User I/O
direct path read                                                   User I/O
SQL*Net message from dblink
cell smart table scan                                              User I/O
direct path read temp                                              User I/O
enq: TM - contention

(The numeric values in the "Total Wait Time (sec)" and "% DB time" columns did not survive in this extract of the report.)
The following conclusions are often drawn from such AWR statistics:
1. The contribution of the Exadata magic to database performance is low – it does not exceed 5% – so the database is poorly "exadated".
2. If such a database is moved from Exadata to the classic "server + storage array" architecture, performance will not change much. Even if the array turns out to be three times slower than the Exadata storage (hardly possible with modern All-Flash arrays), multiplying 5% by three raises the share of I/O waits to only about 15% – such a database will surely survive!
Both conclusions are wrong; moreover, they distort the very idea embodied in the Exadata Software. Exadata does not simply provide fast I/O – it works in a fundamentally different way from the classic "server + array" architecture. If a database's workload really is "exadated", part of the SQL processing logic is offloaded to the storage layer. Thanks to a number of special mechanisms (first of all Exadata Storage Indexes, but not only them), the storage servers locate the required data themselves and send only that data to the database servers. They do this quite efficiently, which is exactly why the share of the typical Exadata waits in the total response time is small.
How will this share change outside of Exadata? How will it affect the overall performance of the database? Testing answers these questions best. For example, outside of Exadata a "cell smart table scan" wait can turn into a full table scan so heavy that I/O consumes the entire response time and performance degrades dramatically. That is why, when analyzing AWR, it is wrong to treat the total percentage of Exadata waits as the contribution of its magic to performance, and even more wrong to use this percentage to predict performance outside of Exadata. To understand how "exadated" a database's workload really is, you need to study the "Instance Activity Stats" section of AWR (it contains many statistics with self-explanatory names) and compare them with one another.
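For example, the degree of offload can be estimated by comparing a few of those statistics with each other. The sketch below uses genuine Oracle statistic names from the "Instance Activity Stats" section, but the byte counts are hypothetical; in practice you would take the values from your own AWR report:

```python
# Rough smart scan efficiency estimate from Instance Activity Stats.
# The statistic names are real Oracle ones; the byte counts are made up.

stats = {
    "physical read total bytes":                                  40_000_000_000_000,
    "cell physical IO bytes eligible for predicate offload":      35_000_000_000_000,
    "cell physical IO bytes saved by storage index":               8_000_000_000_000,
    "cell physical IO interconnect bytes returned by smart scan":  1_500_000_000_000,
}

total_read = stats["physical read total bytes"]
eligible = stats["cell physical IO bytes eligible for predicate offload"]
returned = stats["cell physical IO interconnect bytes returned by smart scan"]
saved_si = stats["cell physical IO bytes saved by storage index"]

# Share of all physical reads that could be offloaded to the cells:
offload_share = 100.0 * eligible / total_read
# Share of the eligible bytes that the cells filtered out instead of
# shipping to the database servers:
scan_efficiency = 100.0 * (1 - returned / eligible)

print(f"I/O eligible for offload: {offload_share:.1f} %")
print(f"Smart scan filtered out:  {scan_efficiency:.1f} % of eligible bytes")
print(f"Storage index skipped:    {saved_si / 1e12:.1f} TB of reads")
```

With numbers like these the picture is the opposite of the naive 5% reading: almost all I/O is offloaded, and the cells return only a small fraction of the scanned bytes over the interconnect.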
And to understand how the database will behave outside of Exadata, it is best to restore a clone of the database from a backup onto the target architecture and analyze the performance of that clone under load. As a rule, Exadata owners have such an opportunity.
Author: Alexey Struchenko, Head of the Database Department at Jet Infosystems