Slow applications reduce user engagement and customer satisfaction, and ultimately lower revenue. Application-database inefficiency (e.g. network latency, slow queries) is a primary cause of performance bottlenecks. This article shows how Heimdall Data's auto-caching solution with Amazon ElastiCache improves performance with no code changes, while preventing stale cache data.

Systems such as Amazon Aurora provide dynamic scaling and performance. Another blog covers horizontal (more servers) and vertical (larger servers) scaling and the trade-offs. Database scaling can be costly, especially for commercially licensed databases. Many teams have used Amazon ElastiCache to improve responsiveness. However, developers still face the challenge of knowing what to cache, what to invalidate, and how to keep data up to date, which traditionally requires manual application code changes. We can now automate caching and invalidation for Amazon ElastiCache with Heimdall Data.

Heimdall Data is an intelligent query routing and caching data access layer that is installed in a distributed way on each EC2 instance.
Heimdall Data Software Packaging:
  • Database proxy for RDS: Aurora, MySQL, SQL Server 2008+, PostgreSQL
  • JDBC Driver: Oracle and any other JDBC-compliant database

For deployment, the only application-level change is to modify the host/port or JDBC URL to route through Heimdall. Figure 1 is a sample architecture diagram for a MySQL proxy configuration.
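To make that routing change concrete, here is a minimal Python sketch (with hypothetical hostnames) of the only change an application needs: rewriting the host and port of its connection URL to point at the Heimdall proxy.

```python
def route_via_heimdall(jdbc_url: str, proxy_host: str, proxy_port: int) -> str:
    """Rewrite the host:port of a JDBC URL so traffic flows through the proxy.
    Naive string parsing, for illustration only."""
    scheme, rest = jdbc_url.split("://", 1)
    _, _, path = rest.partition("/")          # drop the original host:port
    return f"{scheme}://{proxy_host}:{proxy_port}/{path}"

# Hypothetical RDS endpoint; substitute your own hostname and proxy address.
direct = "jdbc:mysql://mydb.example.us-east-1.rds.amazonaws.com:3306/wordpress"
proxied = route_via_heimdall(direct, "127.0.0.1", 3306)
# proxied == "jdbc:mysql://127.0.0.1:3306/wordpress"
```

In proxy deployments no code runs at all: the same host/port substitution is simply made in the application's database configuration file.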

In proxy mode, there are two types of deployment:

  • Distributed mode: A proxy residing on each EC2 application instance, for optimal performance (shown above)
  • Centralized mode: One EC2 instance proxy servicing many application servers

The proxy provides two levels of caching: 1) locally on the application server and 2) on ElastiCache (akin to an L1/L2 cache). As SQL is sent from the application to the database, the proxy responds from the cache and routes requests to different servers (for use in load balancing and read/write split). All of this functionality is provided by Heimdall Data with zero code changes.
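The L1/L2 behavior can be sketched roughly as follows. This is an illustrative Python model, not Heimdall's implementation: a local in-process dictionary stands in for the L1 cache, and a shared store (a plain dict here; ElastiCache Redis in production) stands in for L2.

```python
class TwoTierCache:
    """Illustrative two-level cache: an in-process L1 plus a shared L2.
    In production the L2 would be an ElastiCache Redis client; a plain
    dict stands in here so the sketch is self-contained."""

    def __init__(self, l2_store):
        self.l1 = {}         # local to this application server (fastest)
        self.l2 = l2_store   # shared by every application server

    def get(self, sql):
        if sql in self.l1:              # L1 hit: no network round trip
            return self.l1[sql]
        result = self.l2.get(sql)       # L2 hit: one round trip to the shared cache
        if result is not None:
            self.l1[sql] = result       # promote into L1 for next time
        return result

    def put(self, sql, result):
        self.l1[sql] = result
        self.l2[sql] = result

    def invalidate(self, sql):          # e.g. after a write touching the same table
        self.l1.pop(sql, None)
        self.l2.pop(sql, None)
```

Because L2 is shared, a result cached by one application server is immediately available to all the others, at the cost of one network hop.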
This article covers how the Heimdall system with Amazon ElastiCache and RDS can be deployed with an existing application. You can find a demo instance here.
Users testing with their own infrastructure should follow the installation steps below.

Script Installation of the Heimdall System
The one-line installation process downloads and installs the Heimdall Central Manager and proxy. The default user ID is "admin". If an Amazon instance is successfully detected, the password is the instance ID; otherwise, the default password is "heimdall".

Example Overview
The example uses an Aurora database supporting a WordPress application. Users can explore a live site without setting up their own infrastructure here. Role-based security will prevent modifications to the site but users can explore all features provided with the sample application setup. Testing in your own environment will require configuring an instance as detailed above.

Getting Started
Once the Heimdall Central Manager (HCM) is running on an instance, access it via the server URL on port 8087. For an already configured server, the default tab is the Status tab, which displays current server and system status. For a new installation, users are directed to the Wizard tab.

Using the Heimdall-for-AWS Wizard
Heimdall includes a wizard designed for AWS. Select AWS Detect; otherwise, perform a Manual Configuration. The goal is to connect the database system and caching infrastructure.

If a screen requests AWS IAM credentials, you can either enter them in this window or attach an IAM instance role to the instance through the AWS console. Then select AWS Detect again.
Step 1. Detected Amazon RDS clusters and ElastiCache Redis instances are automatically populated. Select the appropriate RDS cluster and ElastiCache cluster from the drop-down lists. If you do not have an ElastiCache Redis instance available, leave this blank and select the "local cache" option later. Once the information has been selected, click Next.

Step 2. Specify the database server and connection type. This includes the host name, driver, user name, password, and port:

Step 3. Provide the cache configuration. Amazon ElastiCache for Redis is detected automatically, but you may use the other cache options. If you have no cache infrastructure, testing can be done with the local cache feature; note, however, that invalidation information will not be shared when testing with multiple application nodes:

Step 4. The next window provides settings for logging and the use of a proxy. If the database is to be accessed through a proxy (e.g. MySQL, PostgreSQL), Enable Proxy should be checked and a proxy port chosen. The localhost option should be unselected if the proxy runs on a different instance than the application that uses it. For the management server to start the proxy on its own, also select the management server proxy option; otherwise, the proxy must be installed manually.
Step 5. Review the summary screen.
Step 6. After clicking Next, the system provides a summary of important items to follow.
Once Submit is selected on this final page, the configuration is updated. On the demo system, guest users receive a warning indicating they cannot update the server configuration. Browsing the configuration tabs provides information on how the system works.

Step 7. The Virtual Databases tab provides connection info for the application. In this case, the application will access via the MySQL proxy on localhost. If using your own instance, make any changes to this information and click Commit to finalize the configuration.
Cache settings can also be changed on the VDB tab:

Step 8. The Data Sources tab provides the database connection settings such as connection pooling, load balancing, high availability, and query routing (read/write split). If using your own instance, make any changes to this information and click Commit to finalize the database configuration.

Step 9. The Rules tab controls how queries are cached, routed, and transformed when received from the application. The default rules cache all traffic not in transactions, forward selected traffic to a read-only source, and log query traffic. Users can change rules dynamically at any time without restarting the application or database. Information on how the rules are configured is available by clicking the Help button. If using your own instance, make any changes to this information and click Commit to finalize the rule configuration.
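As a rough sketch of what such default rules decide per query (greatly simplified; the actual Heimdall rules engine matches on patterns, tables, and more), consider:

```python
def classify_query(sql: str, in_transaction: bool) -> dict:
    """Toy version of the default rule set described above: cache reads
    outside transactions, send those reads to a read-only replica, and
    send everything else (writes, transactional reads) to the writer."""
    is_read = sql.lstrip().upper().startswith("SELECT")
    cacheable = is_read and not in_transaction
    return {
        "cache": cacheable,
        "route": "reader" if cacheable else "writer",
    }
```

Reads inside a transaction bypass the cache and go to the writer so the application always sees its own uncommitted changes.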
Step 10. When configuring the application to use the Heimdall proxy, the only change required is to update the database configuration to match the Heimdall proxy. In the MySQL demo example, the existing MySQL configuration in WordPress was changed to point at the proxy. This change is usually quite straightforward but is specific to your application installation. Details on the URL to use for the Heimdall JDBC Driver are in the JDBC section of the Virtual Database details or, for the proxy, in the Proxy Configuration section.
Step 11. The dashboard below provides information on query traffic and server performance for a WordPress application. Notice the average query time from the cache is 50 microseconds, compared to 1,000 microseconds from the database. With caching, the data layer sees a performance boost of over 20 times! With a 90% cache hit rate, the load on the database is reduced significantly, allowing more users to be supported on the same database infrastructure. There were no changes to the application besides the database URL/host+port change; no coding or database system changes were required.
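Using the numbers above (50 µs for a cache hit, 1,000 µs for a database round trip, and a 90% hit rate), a quick back-of-the-envelope calculation shows both the per-hit speedup and the blended effect:

```python
# Figures from the dashboard above; treat them as illustrative.
cache_us = 50        # average cache-hit latency (microseconds)
db_us = 1000         # average database query latency (microseconds)
hit_rate = 0.90      # fraction of queries served from cache

per_hit_speedup = db_us / cache_us                         # 20x on cache hits
blended_us = hit_rate * cache_us + (1 - hit_rate) * db_us  # 145 µs on average
blended_speedup = db_us / blended_us                       # roughly 6.9x overall
```

Individual cache hits are 20x faster, while the blended average speedup at a 90% hit rate works out to roughly 7x; the offload effect (10x fewer queries reaching the database) is often the bigger win for scalability.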

Heimdall Data safely automates caching for Amazon ElastiCache. Configuration is simple, requiring zero disruption to the application or Amazon RDS. Users can experience up to a 5x improvement in performance and scale. Heimdall is available as a free trial on the AWS Marketplace or can be downloaded from the Heimdall Data website.