Postgres is an incredibly mature and battle-hardened database that powers many of the applications you rely on daily. From Instacart and Instagram to various Apple services and Spotify, chances are that data related to you resides in a handful of well-scaled and highly tuned Postgres instances across the globe.
Out of the box, Postgres works well enough. If you're lucky and you find yourself with a popular application (one that generates a lot of load through reads and/or writes), you'll likely find yourself needing to scale Postgres. Fortunately, you have a few options, starting with throwing more hardware at the problem: faster disks, additional CPUs, and more memory. You can think of these options as vertically scaling Postgres. Vertical scaling can increase throughput and reduce latencies; however, it doesn't address resilience.
Addressing resilience ultimately requires making copies of the underlying data, which can be achieved through shared storage, read replicas, or distributed versions of Postgres like AWS's Aurora, Yugabyte, and Google's AlloyDB (just to name a few!). These tactics enable you to horizontally scale Postgres, and of these options, one of the easiest to spin up is read replicas.
There are certainly trade-offs when using read replicas. First and foremost, read replicas are asynchronous copies of your primary Postgres database. Changes made on the primary are streamed (via the WAL, or write-ahead log) to the replicas with a slight delay. What's more, read replicas are read-only, meaning you can't write data directly to them. You can only read from them, which reduces resource demands and avoids data conflicts. It's also easy enough to have multiple read replicas spread across different servers or regions, distributing read traffic and improving responsiveness for user queries.
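Because replication is asynchronous, it can be handy to measure that delay. As a sketch, Postgres ships a couple of built-in functions you can run from a psql session on the replica:

```sql
-- Returns true when run on a replica (i.e. a server in recovery mode)
SELECT pg_is_in_recovery();

-- Approximate replication lag: time since the last replayed WAL transaction
SELECT now() - pg_last_xact_replay_timestamp() AS replication_lag;
```

Note that `pg_last_xact_replay_timestamp()` only advances when transactions arrive, so on an idle primary the reported lag can look larger than it really is.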
In essence, read replicas offer a live backup of your database, and in a worst-case scenario, you can promote a read replica to primary should your primary database go down. Of course, there's likely some downtime if this were to happen because, at a minimum, you'll need to reroute writes to the newly christened primary.
Show me the code, man
Setting up a read replica doesn't take a lot of effort! For instance, if you want to see how they work locally and you're using Docker, Bitnami's Postgres Docker image couldn't make it any easier. I created a GitHub repository, named Hoodoo, which contains a Docker Compose file demonstrating read replicas in action.
If you clone the repository and run `docker compose up -d`, you'll find that two Postgres instances have started: one primary and a replica. What's more, you can decompress a provided SQL script and import this data into the primary, enabling you to work with an `employees` database. You'll notice that after you import the `employees` database into the primary, the replica will reflect the same data (i.e. replication in action, baby!).
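As a sketch of those steps (the dump's file name and the credentials here are assumptions — check the Hoodoo repository's README for the exact values):

```shell
# Start the primary and the replica
docker compose up -d

# Decompress the provided SQL dump (file name is an assumption)
gunzip employees.sql.gz

# Import the data into the primary, which is exposed on port 5432
psql -h localhost -p 5432 -U postgres -f employees.sql
```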
As I mentioned, Bitnami has packaged Postgres in a way that makes creating a replica effortless. The key to configuring the primary and one or more replicas is each service's individual `environment` settings in the Docker Compose file. For instance, the primary is configured with the following:
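Here's a minimal sketch of the primary's service definition using Bitnami's replication-related environment variables (the exact service names and credentials in the Hoodoo repository may differ):

```yaml
services:
  postgres-primary:
    image: bitnami/postgresql:latest
    ports:
      - "5432:5432"
    environment:
      # Run this instance as the replication primary
      - POSTGRESQL_REPLICATION_MODE=master
      # Dedicated user the replica authenticates with (values are assumptions)
      - POSTGRESQL_REPLICATION_USER=repl_user
      - POSTGRESQL_REPLICATION_PASSWORD=repl_password
      - POSTGRESQL_PASSWORD=postgres
```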
As you can see above, the primary is configured as `master`, and a special replication username and password are configured. The replica's `environment` is configured as follows:
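A corresponding sketch for the replica (again, service names and credentials are assumptions, and the replication credentials must match the primary's):

```yaml
  postgres-replica:
    image: bitnami/postgresql:latest
    ports:
      - "5434:5432"
    depends_on:
      - postgres-primary
    environment:
      # Run this instance as a read replica
      - POSTGRESQL_REPLICATION_MODE=slave
      # Point the replica at the primary service's host and port
      - POSTGRESQL_MASTER_HOST=postgres-primary
      - POSTGRESQL_MASTER_PORT_NUMBER=5432
      # Must match the replication credentials configured on the primary
      - POSTGRESQL_REPLICATION_USER=repl_user
      - POSTGRESQL_REPLICATION_PASSWORD=repl_password
      - POSTGRESQL_PASSWORD=postgres
```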
Note how the replica is configured as a `slave`; moreover, this service is configured to point at the primary via a host name and port number.
You can connect to either database via the `psql` command: the primary is exposed on port 5432 and the replica on 5434. To get a true feel for how things work in a read replica environment, you can issue a write to the primary and then, in another terminal window, issue a read on the replica (i.e. a `select`), and the data will be there.
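Connecting to each instance might look like this (the username is an assumption — use whatever the Compose file configures):

```shell
# Connect to the primary (accepts reads and writes)
psql -h localhost -p 5432 -U postgres

# In another terminal, connect to the read-only replica
psql -h localhost -p 5434 -U postgres
```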
For instance, first verify the count of employee records in the database via a count query:
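Assuming the sample data lands in an `employees` table (as the standard employees dataset does), the query is simply:

```sql
SELECT COUNT(*) FROM employees;
```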
The answer should be 300,024. Verify this on both the primary and the replica. Next, insert a new employee into the primary via the below SQL statement:
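A sketch of such an insert, assuming the standard employees sample schema (the column list and values here are illustrative):

```sql
-- emp_no, names, and dates are made-up values for demonstration
INSERT INTO employees (emp_no, birth_date, first_name, last_name, gender, hire_date)
VALUES (500000, '1990-01-15', 'Ada', 'Lovelace', 'F', '2024-01-01');
```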
Finally, you can run the same `select count(*)` command on the replica, and you'll see the count has increased by one. Magic! My hope is that the Hoodoo repository makes it super easy to see Postgres read replicas in action and helps you get a feel for how to employ this strategy for horizontally scaling Postgres.
Horizontal scaling trade-offs
For a resilient backup of your database via read replicas, you do make a few trade-offs. There's a slight delay in replication between a primary and a replica; this delay is a function of network latency (ultimately, the speed of light) and processing power. You can use read replicas to offload traffic from a primary, but only read traffic. Depending on your load, you'll also need to pay special attention to the primary's WAL settings: if they aren't tuned correctly, your primary's performance could suffer. Finally, spinning up one or more read replicas will naturally cost you more money, most likely above and beyond what you've already spent on vertically scaling your database (although you'll most likely reap some savings by lowering the primary's load). Read replica instances are also additional assets that'll need monitoring and ongoing maintenance (i.e. upgrades, etc.).
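As a sketch of the WAL-related settings worth reviewing in `postgresql.conf`, here are the usual suspects (the values are illustrative starting points, not recommendations — tune them for your workload):

```ini
# postgresql.conf (illustrative values)
wal_level = replica            # minimum level required for streaming replication
max_wal_senders = 10           # concurrent WAL sender processes (one per replica, plus headroom)
wal_keep_size = 1GB            # retain enough WAL for replicas that fall behind
max_replication_slots = 10     # if you use replication slots to prevent premature WAL removal
```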
If your Postgres database is reaching the limits of vertical scaling and you’re looking to increase reliability, a read replica strategy can be beneficial. Read replicas can increase read performance because you can offload read-heavy workloads from the primary server. Moreover, read replicas improve scalability: you can add more of them to handle increasing read traffic without impacting the primary server. You can also geo-distribute read replicas to reduce latencies. What’s more, read replicas enhance availability. If needed, you can promote a replica to become the primary if the primary fails.
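Promotion itself is straightforward. On Postgres 12 and later, you can promote a replica from a psql session using the built-in `pg_promote()` function (`pg_ctl promote` works from the shell as well):

```sql
-- Run on the replica you want to promote; by default, waits up to 60 seconds
SELECT pg_promote();

-- Afterwards, the instance should report that it's no longer in recovery
SELECT pg_is_in_recovery();
```

Keep in mind that the old primary doesn't automatically become a replica of the newly promoted one; rejoining it to the cluster is a separate step.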
Can you dig it?