One of the first questions any potential customer asks us is “How much can caching improve my application’s performance?”  This is a very difficult question to answer beyond “it depends.”  It is easier to describe a methodology that provides a good estimate of the impact the database has on the application.  With that estimate in hand, it is much easier to gauge the overall impact of optimization.

The first step toward this “SQL Impact” estimate is to measure the application’s overall wait time.  For web applications, this can be as simple as analyzing the web access logs.  In Apache, this requires a log format that includes the “%D” field, which records the time taken to serve each request, in microseconds.  The access times for a given period can then be summed: for a ten-minute window, adding every request’s time together yields the total access time.  In most cases, this total will be well in excess of the wall-clock period itself, since requests are served concurrently; an hour of cumulative access time may be spread across many threads during a ten-minute window.
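As a rough sketch, the summation can be done with a few lines of Python.  This assumes a custom LogFormat whose last field is %D; adjust the parsing if your format places the field elsewhere.

```python
# Sketch: sum per-request service times from an Apache access log.
# Assumes a LogFormat whose LAST field is %D (time taken to serve the
# request, in microseconds) -- adapt the parsing to your actual format.

def total_access_seconds(lines):
    """Sum the %D field (microseconds) across all requests, in seconds."""
    total_us = 0
    for line in lines:
        fields = line.split()
        if fields and fields[-1].isdigit():
            total_us += int(fields[-1])
    return total_us / 1_000_000

# Illustrative log lines: three requests taking 1.2s, 0.8s, and 2.0s
log = [
    '10.0.0.1 - - [01/Jan/2024:10:00:01 +0000] "GET / HTTP/1.1" 200 512 1200000',
    '10.0.0.2 - - [01/Jan/2024:10:00:03 +0000] "GET /a HTTP/1.1" 200 128 800000',
    '10.0.0.3 - - [01/Jan/2024:10:00:07 +0000] "GET /b HTTP/1.1" 200 256 2000000',
]
print(total_access_seconds(log))  # 4.0 seconds of total access time
```

In a real run, the lines would come from the access log file, filtered to the ten-minute window of interest.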

An alternate way to gather this piece of information is to create a local benchmark that replicates user activity.  This can be done with a variety of tools, but the goal is to record the actual response time of each query, preferably issuing several queries across a single connection.
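A minimal version of such a benchmark might look like the following.  SQLite stands in for the real database driver here, and the table and queries are purely illustrative; the point is recording a per-query response time over one connection.

```python
# Sketch of a minimal benchmark recording per-query response times
# over a single connection. sqlite3 stands in for the real database
# driver; the schema and queries are illustrative assumptions.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("a",), ("b",)])

timings = []
for sql in ["SELECT COUNT(*) FROM users",
            "SELECT name FROM users WHERE id = 1"]:
    start = time.perf_counter()
    conn.execute(sql).fetchall()          # fetch so the full query cost is timed
    timings.append((sql, time.perf_counter() - start))

for sql, elapsed in timings:
    print(f"{elapsed * 1000:8.3f} ms  {sql}")
```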

The second piece of data is a similar wait-time analysis of the SQL traffic for the same period.  This can be done by enabling the SQL logging function in Heimdall.  Adding together the execution times of all queries during the test period yields the total SQL wait time.
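The summation itself is trivial once the log is in a tabular form.  The column layout below is an assumption for illustration; Heimdall’s actual log format may differ, so adapt the field names to your export.

```python
# Sketch: total SQL wait time from a query log exported as CSV.
# The column names here are assumptions -- adapt them to the actual
# log format produced by your SQL logging tool.
import csv
import io

log_csv = """query,exec_time_ms
SELECT * FROM orders,12.5
SELECT * FROM users WHERE id = ?,3.1
UPDATE inventory SET qty = qty - 1,8.4
"""

reader = csv.DictReader(io.StringIO(log_csv))
total_ms = sum(float(row["exec_time_ms"]) for row in reader)
print(total_ms / 1000)  # total SQL wait time in seconds
```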

Once the two measurements are gathered for the same time period, they can be compared to determine what percentage of application-level wait time is the result of data access.  If, for example, the total application time is one hour and the database access time is thirty minutes, then the database’s overall impact on the application is 50%.  Implementing caching with a 50% cache hit rate in such a situation could therefore yield up to a 25% overall performance improvement.
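The arithmetic from this example is simply the database’s share of wait time multiplied by the cache hit rate:

```python
# The worked example above: the database's share of application wait
# time, times the cache hit rate, bounds the overall improvement.
app_time_s = 3600.0   # total application wait time (one hour)
db_time_s = 1800.0    # total SQL wait time (thirty minutes)

sql_impact = db_time_s / app_time_s        # 0.5 -> 50% of wait is data access
cache_hit_rate = 0.5
max_improvement = sql_impact * cache_hit_rate
print(f"{max_improvement:.0%}")  # 25%
```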

The advantage of this methodology is that it quickly establishes what is possible: instead of spending time tuning the cache and chasing an optimal cache hit rate, it bounds expectations, making it possible to decide whether the effort is worthwhile before continuing to optimize database performance.

Next in this series, we will apply this methodology to analyze an application from end to end, in order to show the impact of SQL on the application, including with different amounts of latency between the application and the database.  This will include the scripts and steps used to configure the application, the sample data, and the overall results.