PostgreSQL for Database Failover

For someone new to PostgreSQL, its database failover story is very database oriented, not application oriented.  It can synchronize data between nodes using Slony-I or streaming replication, but this provides no built-in answer for failing applications over between those nodes at the application layer.  For this, Pgpool-II, a separate open source proxy, is often used.  That approach adds an extra network hop to every query, can be complex to install, and requires modifications to the database to function fully.  In our experience it is difficult to get right, ends up adding another single point of failure (i.e. what provides resiliency for Pgpool-II itself?), and depends on custom modifications to the PostgreSQL install that future updates may break.  Heimdall provides a low-latency, easy-to-configure failover setup for PostgreSQL that avoids a single point of failure in the data path and fails over quickly.
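For reference, the replication layer itself takes only a few settings to stand up. The following is a minimal sketch for PostgreSQL 12 and later; the user name, network range, and host name are placeholders, and a production setup would add replication slots, TLS, and monitoring on top of this:

```
# --- primary: postgresql.conf ---
wal_level = replica          # emit enough WAL for a standby to replay
max_wal_senders = 5          # allow a few concurrent replication connections

# --- primary: pg_hba.conf ---
# let the replication user connect from the standby's network (placeholder CIDR)
host  replication  repl_user  10.0.0.0/24  scram-sha-256

# --- standby: postgresql.conf (plus an empty standby.signal file) ---
primary_conninfo = 'host=primary.example.internal user=repl_user'
```

Note that none of this tells the application which node to connect to after a failover — that is exactly the gap the proxy or Heimdall layer has to fill.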

One common issue across the board for all database users is a lack of understanding of how database failures behave. For example, if a database failover completes successfully, does that guarantee the application is still online?  Does a database failure require an application restart?  If it does, an automated database failover is of limited value on its own.
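One way an application can survive the brief window while a failover completes is to wrap database calls in retry logic with backoff, rather than failing (or requiring a restart) on the first connection error. The sketch below illustrates the idea with a simulated flaky backend; the function names and delays are illustrative, not part of Heimdall or any driver:

```python
import time

def with_retry(operation, attempts=5, base_delay=0.1):
    """Run a zero-argument callable, retrying on connection errors.

    Backs off exponentially between tries so the application rides out
    a short failover window instead of surfacing the error immediately.
    """
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # failover took too long; give up
            time.sleep(base_delay * (2 ** attempt))

# Simulated backend: fails twice (as during a failover), then recovers.
calls = {"n": 0}

def query():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("database is failing over")
    return "row"

print(with_retry(query))  # prints "row" after two retries
```

In a real application the callable would open a fresh connection and run the query, so each retry can land on the newly promoted node; retrying on a stale, already-broken connection object would defeat the purpose.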

Above are some architectural considerations to keep in mind when designing application-to-database failover.  In the next series, we will explore failover with Jira and PostgreSQL, and detail the scripts needed to make it work seamlessly with Heimdall.

This concludes our 4-part blog series on “Understanding Database Failover in the Cloud and Across Regions”. If you have any comments or questions, please send me an email: