PostgreSQL 12 High Availability Cookbook - Shaun Thomas - E-Book

Description

A comprehensive guide to understanding key techniques for architecture and hardware planning, monitoring, replication, backups, and decoupling




Key Features



  • Newly updated edition, covering the latest PostgreSQL 12 features with hands-on industry-driven recipes


  • Create a PostgreSQL cluster that stays online even when disaster strikes


  • Learn how to avoid costly downtime and data loss that can ruin your business



Book Description



Databases are nothing without the data they store. In the event of an outage or technical catastrophe, immediate recovery is essential. This updated edition ensures that you will learn the important concepts related to node architecture design, as well as techniques such as using repmgr for failover automation. From cluster layout and hardware selection to software stacks and horizontal scalability, this PostgreSQL cookbook will help you build a PostgreSQL cluster that will survive crashes, resist data corruption, and grow smoothly with customer demand.






You'll start by understanding how to plan a PostgreSQL database architecture that is resistant to outages and scalable, as it is the scaffolding on which everything rests. With the bedrock established, you'll cover the topics that PostgreSQL database administrators need to know to manage a highly available cluster: configuration, troubleshooting, monitoring and alerting, backups, decoupling through proxies, failover automation, and other considerations that are essential for a healthy PostgreSQL cluster. Later, you'll learn to use multi-master replication to maximize server availability, and the final chapters will guide you through managing major version upgrades without downtime.






By the end of this book, you'll have learned how to build an efficient and adaptive PostgreSQL 12 database cluster.




What you will learn



  • Understand how to protect data with PostgreSQL replication tools


  • Focus on hardware planning to ensure that your database runs efficiently


  • Reduce database resource contention with connection pooling


  • Monitor and visualize cluster activity with Nagios and the TIG (Telegraf, InfluxDB, Grafana) stack


  • Construct a robust software stack that can detect and avert outages


  • Use multi-master to achieve an enduring PostgreSQL cluster



Who this book is for



This book is for Postgres administrators and developers who are looking to build and maintain a highly reliable PostgreSQL cluster. Although knowledge of the new features of PostgreSQL 12 is not required, a basic understanding of PostgreSQL administration is expected.

The e-book can be read in the Legimi apps or in any app that supports the following format:

EPUB

Page count: 945

Publication year: 2020




PostgreSQL 12 High Availability Cookbook, Third Edition
Over 100 recipes to design a highly available server with the advanced features of PostgreSQL 12
Shaun Thomas
BIRMINGHAM - MUMBAI

PostgreSQL 12 High Availability Cookbook Third Edition

Copyright © 2020 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author(s), nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

 

Commissioning Editor: Amey Varangaonkar
Acquisition Editor: Devika Battike
Content Development Editor: Roshan Kumar
Senior Editor: Jack Cummings
Technical Editor: Manikandan Kurup
Copy Editor: Safis Editing
Project Coordinator: Aishwarya Mohan
Proofreader: Safis Editing
Indexer: Priyanka Dhadke
Production Designer: Deepika Naik

First published: July 2014
Second edition: February 2017
Third edition: February 2020

Production reference: 1250220

Published by Packt Publishing Ltd. Livery Place 35 Livery Street Birmingham B3 2PB, UK.

ISBN 978-1-83898-485-4

www.packt.com

 

Packt.com

Subscribe to our online digital library for full access to over 7,000 books and videos, as well as industry-leading tools to help you plan your personal development and advance your career. For more information, please visit our website.

Why subscribe?

Spend less time learning and more time coding with practical eBooks and Videos from over 4,000 industry professionals

Improve your learning with Skill Plans built especially for you

Get a free eBook or video every month

Fully searchable for easy access to vital information

Copy and paste, print, and bookmark content

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.packt.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.

At www.packt.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks. 

Contributors

About the author

Shaun Thomas has been experimenting with PostgreSQL since late 2000 and serves as a database consultant, teacher, blogger, and support engineer with 2ndQuadrant. He has presented at conferences such as Postgres Open, 2Q PGCONF, and PgConf on topics such as handling extreme throughput, high availability, failover techniques, monitoring tools, database architecture, multi-master conflict avoidance, and high availability upgrade concepts. He believes in a multi-disciplinary approach when it comes to high availability.

He believes that PostgreSQL has a stupendous future ahead, and he can't wait to see the advancements subsequent versions will bring.

 

About the reviewers

Ilja Everilä is a software developer and consultant with over a decade of experience in various projects, currently employed at Siili Solutions Oyj. He has done both frontend and backend work and database administration in PostgreSQL. All in all, he is very much into database work and found this book an interesting read on an important subject.

 

Richard Yen received a bachelor of science in electrical engineering and computer science from the University of California, Berkeley. He started his career as a developer and DBA for Turnitin, an ed-tech start-up, and went on to work as a support engineer for EnterpriseDB, a professional services and software company focusing on PostgreSQL. His wealth of experience interacting with many clients, ranging from small start-ups to large financial and governmental institutions, has exposed him to many technologies, several of which are covered in this book.

 

 

 

 

Packt is searching for authors like you

If you're interested in becoming an author for Packt, please visit authors.packtpub.com and apply today. We have worked with thousands of developers and tech professionals, just like you, to help them share their insight with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.

Table of Contents

Title Page

Copyright and Credits

PostgreSQL 12 High Availability Cookbook Third Edition

About Packt

Why subscribe?

Contributors

About the author

About the reviewers

Packt is searching for authors like you

Preface

Who this book is for

What this book covers

To get the most out of this book

Download the color images

Conventions used

Sections

Getting ready

How to do it…

How it works…

There's more…

See also

Get in touch

Reviews

Architectural Considerations

Setting expectations with RPO

Getting ready

How to do it...

How it works...

There's more...

Defining timetables through RTO

Getting ready

How to do it...

How it works...

There's more...

This may seem familiar

Node counts

Picking redundant copies

Getting ready

How to do it...

How it works...

There's more...

Selecting locations

Getting ready

How to do it...

How it works...

There's more...

See also

Having enough backups

Getting ready

How to do it...

How it works...

There's more...

See also

Considering quorum

Getting ready

How to do it...

How it works...

There's more...

Introducing indirection

Getting ready

How to do it...

How it works...

There's more...

See also

Preventing split brain

Getting ready

How to do it...

How it works...

There's more...

Incorporating multi-master

Getting ready

How to do it...

How it works...

There's more...

See also

Leveraging multi-master

Getting ready

How to do it...

How it works...

There's more...

See also

Hardware Planning

Planning for redundancy

Getting ready

How to do it...

How it works...

There's more...

See also

Having enough IOPS

Getting ready

How to do it...

How it works...

There's more...

A working example

Making concessions

Sizing storage

Getting ready

How to do it...

How it works...

There's more...

Real-world example

Adjusting the numbers

Incorporating the spreadsheet

Investing in a RAID

Getting ready

How to do it...

How it works...

There's more...

See also

Picking a processor

Getting ready

How to do it...

How it works...

There's more...

Simultaneous multithreading 

Clock boosting

Power usage

See also

Allocating enough memory

Getting ready

How to do it...

How it works...

There's more...

Exploring nimble networking

Getting ready

How to do it...

How it works...

There's more...

A networking example

Remember redundancy

Saving the research

Managing motherboards

Getting ready

How to do it...

How it works...

There's more...

See also

Selecting a chassis

Getting ready

How to do it...

How it works...

There's more...

Saddling up to a SAN

Getting ready

How to do it...

How it works...

There's more...

See also

Tallying up

Getting ready

How to do it...

How it works...

There's more...

Protecting your eggs

Getting ready

How to do it...

How it works...

There's more...

Minimizing Downtime

Determining acceptable losses

Getting ready

How to do it...

How it works...

Configuration – getting it right the first time

Getting ready

How to do it...

How it works...

There's more...

See also

Configuration – managing scary settings

Getting ready

How to do it...

How it works...

There's more...

Distinct settings

More information

See also

Identifying important tables

Getting ready

How to do it...

How it works...

There's more...

Reset stats

Using pgstattuple

See also

Defusing cache poisoning

Getting ready

How to do it...

How it works...

See also

Terminating rogue connections

Getting ready

How to do it...

How it works...

There's more...

Reducing contention with concurrent indexes

Getting ready

How to do it...

How it works...

There's more...

No transactions

One at a time

Dangers of OLTP use

See also

Managing system migrations

Getting ready

How to do it...

How it works...

There's more...

See also

Managing software upgrades

Getting ready

How to do it...

How it works...

There's more...

See also

Mitigating the impact of hardware failure

Getting ready

How to do it...

How it works...

There's more...

Copying WAL files more easily

Built-in delay

Adding compression

Secondary delay

See also

Applying bonus kernel tweaks

Getting ready

How to do it...

How it works...

There's more...

Some additional background

Be wary of THP

Proxy and Pooling Resources

Exploring the magic of virtual IPs

Getting ready

How to do it...

How it works...

There's more...

Obtaining and installing HAProxy

Getting ready

How to do it...

How it works...

See also

Configuring HAProxy to load balance PostgreSQL

Getting ready

How to do it...

How it works...

There's more...

Determining connection costs and limits

Getting ready

How to do it...

How it works...

There's more...

Installing PgBouncer

Getting ready

How to do it...

How it works...

See also

Configuring PgBouncer safely

Getting ready

How to do it...

How it works...

There's more...

What about pool_mode?

Problems with prepared statements

See also

Connecting to PgBouncer

Getting ready

How to do it...

How it works...

See also

Listing PgBouncer server connections

Getting ready

How to do it...

How it works...

There's more...

See also

Listing PgBouncer client connections

Getting ready

How to do it...

How it works...

There's more...

See also

Evaluating PgBouncer pool health

Getting ready

How to do it...

How it works...

There's more...

See also

Changing PgBouncer connections while online

Getting ready

How to do it...

How it works...

There's more...

See also

Enhancing PgBouncer authentication

Getting ready

How to do it...

How it works...

There's more...

See also

Troubleshooting

Performing triage

Getting ready

How to do it...

How it works...

There's more...

Installing common statistics packages

How to do it...

How it works...

Evaluating the current disk performance with iostat

Getting ready

How to do it...

How it works...

There's more...

See also

Tracking I/O-heavy processes with iotop

Getting ready

How to do it...

How it works...

There's more...

See also

Viewing past performance with sar

Getting ready

How to do it...

How it works...

There's more...

See also

Correlating performance with dstat

Getting ready

How to do it...

How it works...

See also

Interpreting /proc/meminfo

Getting ready

How to do it...

How it works...

There's more...

See also

Examining /proc/net/bonding/bond0

Getting ready

How to do it...

How it works...

Checking the pg_stat_activity view

Getting ready

How to do it...

How it works...

There's more...

See also

Checking the pg_stat_statements view

Getting ready

How to do it...

How it works...

There's more...

Reset the status

Catch more queries

See also

Deciphering database locks

Getting ready

How to do it...

How it works...

There's more...

See also

Debugging with strace

Getting ready

How to do it...

How it works...

There's more...

See also

Logging checkpoints properly

Getting ready

How to do it...

How it works...

There's more...

See also

Monitoring

Figuring out what to monitor

Getting ready

How to do it...

How it works...

There's more...

Installing and configuring Nagios

Getting ready

How to do it...

How it works...

There's more...

See also

Configuring Nagios to monitor a database host

Getting ready

How to do it...

How it works...

There's more...

See also

Enhancing Nagios with Check_MK

Getting ready

How to do it...

How it works...

There's more...

See also

Getting to know check_postgres

Getting ready

How to do it...

How it works...

There's more...

See also

Installing and configuring Telegraf

Getting ready

How to do it...

How it works...

See also

Adding a custom PostgreSQL monitor to Telegraf

Getting ready

How to do it...

How it works...

There's more...

See also

Installing and configuring InfluxDB

Getting ready

How to do it...

How it works...

There's more...

See also

Installing and configuring Grafana

Getting ready

How to do it...

How it works...

See also

Building a graph in Grafana

Getting ready

How to do it...

How it works...

See also

Customizing a Grafana graph

Getting ready

How to do it...

How it works...

There's more...

See also

Using InfluxDB tags in Grafana

Getting ready

How to do it...

How it works...

There's more...

See also

PostgreSQL Replication

Deciding what to copy

Getting ready

How to do it...

How it works...

Securing the WAL stream

Getting ready

How to do it...

How it works...

There's more...

See also

Setting up a hot standby

Getting ready

How to do it...

How it works...

See also

Upgrading to asynchronous replication

Getting ready

How to do it...

How it works...

There's more...

Cascading replication

Using replication slots

Viewing replication status on a replica

See also

Bulletproofing with synchronous replication

Getting ready

How to do it...

How it works...

There's more...

Being less strict

Being more strict

Enabling extreme durability

See also

Faking replication with pg_receivewal

Getting ready

How to do it...

How it works...

There's more...

See also

Setting up Slony

Getting ready

How to do it...

How it works...

There's more...

See also

Copying a few tables with Slony

Getting ready

How to do it...

How it works...

There's more...

See also

Setting up Bucardo

Getting ready

How to do it...

How it works...

See also

Copying a few tables with Bucardo

Getting ready

How to do it...

How it works...

See also

Setting up pglogical

Getting ready

How to do it...

How it works...

See also

Copying a few tables with pglogical

Getting ready

How to do it...

How it works...

There's more...

See also

Copying a few tables with native logical replication

Getting ready

How to do it...

How it works...

There's more...

No sequences

Tool integration

Keys required for UPDATE and DELETE

See also

Backup Management

Deciding when to use third-party tools

Getting ready

How to do it...

How it works...

Installing and configuring Barman

Getting ready

How to do it...

How it works...

See also

Backing up a database with Barman

Getting ready

How to do it...

How it works...

There's more...

Retention policies

Parallel backup

See also

Restoring a database with Barman

Getting ready

How to do it...

How it works...

There's more...

Streaming replicas

RPO zero

See also

Obtaining Barman diagnostics and information

Getting ready

How to do it...

How it works...

Sending Barman backups to a remote location

Getting ready

How to do it...

How it works...

Installing and configuring pgBackRest

Getting ready

How to do it...

How it works...

There's more...

See also

Backing up a database with pgBackRest

Getting ready

How to do it...

How it works...

There's more...

More backup types

Use full regularly

See also

Restoring a database with pgBackRest

Getting ready

How to do it...

How it works...

There's more...

Installing and configuring WAL-E

Getting ready

How to do it...

How it works...

See also

Managing WAL files with WAL-E

Getting ready

How to do it...

How it works...

There's more...

Recovering WAL files

Backing up the database

Removing old files

See also

High Availability with repmgr

Preparing systems for repmgr

Getting ready

How to do it...

How it works...

Installing and configuring repmgr

Getting ready

How to do it...

How it works...

See also

Cloning a database with repmgr

Getting ready

How to do it...

How it works...

There's more...

Viewing the cluster

Cloning from Barman

See also

Incorporating a repmgr witness

Getting ready

How to do it...

How it works...

See also

Performing a managed failover

Getting ready

How to do it...

How it works...

There's more...

Always watching

Testing the waters

Useful shortcuts

See also

Customizing the failover process

Getting ready

How to do it...

How it works...

There's more...

See also

Using an outage to test availability

Getting ready

How to do it...

How it works...

There's more...

Returning a node to the cluster

Getting ready

How to do it...

How it works...

There's more...

See also

Integrating primary fencing

Getting ready

How to do it...

How it works...

There's more...

See also

Performing online maintenance and upgrades

Getting ready

How to do it...

How it works...

There's more...

See also

High Availability with Patroni

Understanding more about Patroni and its components

Why HAProxy?

Why etcd?

Why Patroni?

The stack

Preparing systems for the stack

Getting ready

How to do it...

How it works...

See also

Installing and configuring etcd

Getting ready

How to do it...

How it works...

There's more...

See also

Installing and configuring Patroni

Getting ready

How to do it...

How it works...

There's more...

Cluster status

Better service management

See also

Installing and configuring HAProxy

Getting ready

How to do it...

How it works...

See also

Performing a managed switchover

Getting ready

How to do it...

How it works...

There's more...

Using an outage to test availability

Getting ready

How to do it...

How it works...

There's more...

Returning a node to the cluster

Getting ready

How to do it...

How it works...

There's more...

Adding additional nodes to the mix

Getting ready

How to do it...

How it works...

There's more...

See also

Replacing etcd with ZooKeeper

Getting ready

How to do it...

How it works...

There's more...

See also

Replacing etcd with Consul

Getting ready

How to do it...

How it works...

There's more...

See also

Upgrading while staying online

Getting ready

How to do it...

How it works...

There's more...

Low-Level Server Mirroring

Understanding our chosen filesystem components

Why DRBD?

Why LVM?

Why XFS?

The stack

Preparing systems for volume mirroring

Getting ready

How to do it...

How it works...

There's more...

See also

Getting started with the LVM

Getting ready

How to do it...

How it works...

There's more...

See also

Adding block-level replication

Getting ready

How to do it...

How it works...

See also

Incorporating the second LVM layer

Getting ready

How to do it...

How it works...

There's more...

See also

Verifying a DRBD filesystem

Getting ready

How to do it...

How it works...

There's more...

See also

Correcting a DRBD split brain

Getting ready

How to do it...

How it works...

See also

Formatting an XFS filesystem

Getting ready

How to do it...

How it works...

See also

Tweaking XFS performance

Getting ready

How to do it...

How it works...

There's more...

See also

Maintaining an XFS filesystem

Getting ready

How to do it...

How it works...

There's more...

See also

Using LVM snapshots

Getting ready

How to do it...

How it works...

See also

Switching live stack systems

Getting ready

How to do it...

How it works...

There's more...

Detaching a problematic node

Getting ready

How to do it...

How it works...

There's more...

See also

High Availability via Pacemaker

Before we begin...

Installing the components

Getting ready

How to do it...

How it works...

There's more...

See also

Configuring Corosync

Getting ready

How to do it...

How it works...

See also

Preparing start up services

Getting ready

How to do it...

How it works...

There's more...

See also

Starting with base options

Getting ready

How to do it...

How it works...

There's more...

See also

Adding DRBD to cluster management

Getting ready

How to do it...

How it works...

There's more...

See also

Adding LVM to cluster management

Getting ready

How to do it...

How it works...

There's more...

Adding XFS to cluster management

Getting ready

How to do it...

How it works...

Adding PostgreSQL to cluster management

Getting ready

How to do it...

How it works...

There's more...

Adding a virtual IP to proxy the cluster

Getting ready

How to do it...

How it works...

Adding an email alert

Getting ready

How to do it...

How it works...

Grouping associated resources

Getting ready

How to do it...

How it works...

Combining and ordering related actions

Getting ready

How to do it...

How it works...

Performing a managed resource migration

Getting ready

How to do it...

How it works...

There's more...

Using an outage to test migration

Getting ready

How to do it...

How it works...

There's more...

High Availability with Multi-Master Replication

Overview of multi-master

Deciding whether multi-master is right for you

Getting ready

How to do it...

How it works...

See also

Obtaining and installing BDR

Getting ready

How to do it...

How it works...

There's more...

Starting with a single BDR node

Getting ready

How to do it...

How it works...

There's more...

Creating an additional BDR node

Getting ready

How to do it...

How it works...

There's more...

Testing DDL replication on each node

Getting ready

How to do it...

How it works...

There's more...

Using sequences safely

Getting ready

How to do it...

How it works...

There's more...

Global allocation sequences

Timeshard introspection

See also

Configuring HAProxy for the multi-master approach

Getting ready

How to do it...

How it works...

There's more...

See also

Combining PgBouncer with HAProxy

Getting ready

How to do it...

How it works...

Performing a managed node switchover

Getting ready

How to do it...

How it works...

There's more...

See also

Improving failover speed

Getting ready

How to do it...

How it works...

Performing a major version upgrade online

Getting ready

How to do it...

How it works...

There's more...

Data Distribution

Identifying horizontal candidates

Getting ready

How to do it...

How it works...

There's more...

See also

Setting up a foreign PostgreSQL server

Getting ready

How to do it...

How it works...

There's more...

Altering foreign servers

Dropping foreign servers

See also

Mapping a remote user

Getting ready

How to do it...

How it works...

There's more...

See also

Creating a foreign table

Getting ready

How to do it...

How it works...

There's more...

Creating all tables for a foreign schema

Dropping foreign tables

See also

Using a foreign table in a query

Getting ready

How to do it...

How it works...

There's more...

Explaining strange planner decisions

Improvements in PostgreSQL 9.6

Improvements in PostgreSQL 10

Improvements in PostgreSQL 11

Optimizing foreign table access

Getting ready

How to do it...

How it works...

There's more...

Transforming foreign tables into local tables

Getting ready

How to do it...

How it works...

There's more...

See also

Creating a scalable nextval replacement

Getting ready

How to do it...

How it works...

There's more...

Building a sharding API

Getting ready

How to do it...

How it works...

There's more...

shard_manager extension

Citus

Postgres-XL

BDR AutoScale

See also

Talking to the correct shard

Getting ready

How to do it...

How it works...

There's more...

Keeping things fast by creating a cache

Choosing application data to map to a logical shard

Moving a shard to another server

Getting ready

How to do it...

How it works...

There's more...

Zero-downtime Upgrades

Preparing upgrade requirements

Getting ready

How to do it…

How it works…

Remembering PgBouncer and pglogical

Getting ready

How to do it…

How it works…

There's more…

See also

Creating a publication set

Getting ready

How to do it…

How it works…

There's more…

Handling sequences

Getting ready

How to do it…

How it works…

There's more…

Bootstrapping the target cluster

Getting ready

How to do it…

How it works…

There's more…

Starting the subscription

Getting ready

How to do it…

How it works…

There's more…

Monitoring progress

Getting ready

How to do it…

How it works…

There's more…

See also

Switching targets

Getting ready

How to do it…

How it works…

There's more…

Cleaning everything up

Getting ready

How to do it…

How it works…

Other Books You May Enjoy

Leave a review - let other readers know what you think

Preface

Database outages are costly and largely avoidable. This book will help you design and build an indestructible PostgreSQL 12 cluster that can remain online even in the most dire circumstances.

From cluster layout and hardware selection to software stacks and horizontal scalability, this book will help you build a versatile PostgreSQL cluster that will survive crashes, resist data corruption, and grow smoothly with customer demand. Configuration, troubleshooting, monitoring and alerting, backups, replication, decoupling through proxies, failover automation, horizontal scaling, upgrades, and several other considerations are all part of a healthy PostgreSQL cluster. By the end of this book, you will have learned all the new features of PostgreSQL 12 to help you build a more efficient and adaptive database.

Who this book is for

PostgreSQL administrators and developers who wish to build and maintain a highly reliable PostgreSQL cluster will find this book to be an extremely useful resource. 

What this book covers

Chapter 1, Architectural Considerations, explains how the layout of the nodes in our PostgreSQL cluster can drastically influence its availability.

Chapter 2, Hardware Planning, explains the selection and provisioning of hardware required to build a highly available PostgreSQL database.

Chapter 3, Minimizing Downtime, explains how we should react when outages inevitably occur and how to prepare ourselves for them.

Chapter 4, Proxy and Pooling Resources, explains how to combine and abstract connectivity to isolate and protect the database.

Chapter 5, Troubleshooting, covers several techniques to track sources of poor performance or stop potential outages before they occur.

Chapter 6, Monitoring, focuses on what we should monitor, how often we should check system status, and how to present the data for easy consumption. We will learn how to effectively monitor PostgreSQL's server status and database performance.

Chapter 7, PostgreSQL Replication, shows how to utilize PostgreSQL replication as well as third-party table synchronization tools. We will learn several methods to copy entire databases or individual tables.

Chapter 8, Backup Management, shows where to turn when backing up large PostgreSQL clusters becomes a concern. This chapter will help ensure backups not only fulfill their stated role but are also reliable, fast, and efficient.

Chapter 9, High Availability with repmgr, is dedicated to building a fully automated high-availability stack using the repmgr replica and cluster management tools by 2ndQuadrant.

Chapter 10, High Availability with Patroni, explains how to build a quick, yet adaptable, high-availability stack to keep our PostgreSQL servers online.

Chapter 11, Low-Level Server Mirroring, shows how to build and manipulate a fault-tolerant, high-performance foundation for our PostgreSQL clusters.

Chapter 12, High Availability via Pacemaker, shows how to automate cluster management and ensure high availability. This chapter covers Corosync and Pacemaker and the steps to manage dual-node servers with this software.

Chapter 13, High Availability with Multi-Master Replication, explains how multiple writable PostgreSQL nodes affect the concept of high availability. This chapter explores the inherent limitations of multi-master database technology as applied to PostgreSQL, as well as how to best utilize it to maximize application availability.

Chapter 14, Data Distribution, shows how clever data management can increase uptime even further. 

Chapter 15, Zero Downtime Upgrades, explains how to upgrade a cluster while remaining fully online.

To get the most out of this book

This book concentrates on Unix systems with a focus on Linux in particular. Such servers have become increasingly popular for hosting databases for large and small companies. As such, we highly recommend that you use a virtual machine or development system running a recent copy of Debian, Ubuntu, Red Hat Enterprise Linux, or a variant such as CentOS or Scientific Linux.

You will also need a copy of PostgreSQL. If your chosen Linux distribution isn't keeping the included PostgreSQL packages sufficiently up to date, the PostgreSQL website maintains binaries for most popular distributions. You can find them at https://www.postgresql.org/download/.

Users of Red Hat Enterprise Linux and its variants should refer to the following URL to add the official PostgreSQL YUM repository to important database systems: https://yum.postgresql.org/repopackages.php.

Users of Debian, Ubuntu, Mint, and other related Linux systems should refer to the PostgreSQL APT wiki page at this URL instead: https://wiki.postgresql.org/wiki/Apt.

Be sure to include any contrib packages in your installation. They include helpful utilities and database extensions that we will use in some recipes.

Users of BSD should still be able to follow along with these recipes. Some commands may require slight alterations to run properly on BSD. Otherwise, all commands have been confirmed to work on Bash and recent GNU tools.

Download the color images

We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: https://static.packt-cdn.com/downloads/9781838984854_ColorImages.pdf.

Conventions used

There are a number of text conventions used throughout this book.

CodeInText: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: "The Environment column has another goal related to physical separation."

A block of code is set as follows:

[global]
repo1-host=pg-primary
repo1-host-user=postgres
repo1-path=/var/lib/pgbackrest
repo1-retention-full=1
start-fast=y

Any command-line input or output is written as follows:

sudo yum install pgbackrest

Bold: Indicates a new term, an important word, or words that you see onscreen. For example, words in menus or dialog boxes appear in the text like this. Here is an example: "Select System info from the Administration panel."

Warnings or important notes appear like this.
Tips and tricks appear like this.

Sections

In this book, you will find several headings that appear frequently (Getting ready, How to do it..., How it works..., There's more..., and See also).

To give clear instructions on how to complete a recipe, use these sections as follows:

Getting ready

This section tells you what to expect in the recipe and describes how to set up any software or any preliminary settings required for the recipe.

How to do it…

This section contains the steps required to follow the recipe.

How it works…

This section usually consists of a detailed explanation of what happened in the previous section.

There's more…

This section consists of additional information about the recipe in order to make you more knowledgeable about the recipe.

See also

This section provides helpful links to other useful information for the recipe.

Get in touch

Feedback from our readers is always welcome.

General feedback: If you have questions about any aspect of this book, mention the book title in the subject of your message and email us at [email protected].

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata, selecting your book, clicking on the Errata Submission Form link, and entering the details.

Piracy: If you come across any illegal copies of our works in any form on the Internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.

Reviews

Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!

For more information about Packt, please visit packt.com.

Architectural Considerations

In many ways, database server architecture is treated as a mere afterthought. It's often much easier to simply create a single node, install some software, and consider the whole affair resolved. If a company is particularly paranoid, they may even spare some thought for a replica server, or perhaps some kind of backup.

The true importance of database cluster architecture is easily overlooked as a result. But what is server architecture? Why does it matter?

Look down the street. Any street is fine. What do you see? Homes, offices, and buildings of various descriptions. With very rare exceptions, each one of these was meticulously planned, from the foundation and walls to the electrical wiring, pipes, roof, and drainage systems. A failure in any of these components could lead to the ultimate demise of the entire structure, given enough time.

The same also applies to a PostgreSQL cluster! Database architecture defines what goes into a database server cluster, and the reason for each element. How does it communicate? How many nodes are required? Where do we put those nodes, and why? What common problems are inherent in those decisions? How will our decisions influence the underlying cost? What trade-offs can we make, given some important constraints? How does all of this affect data availability? We need those answers before we even consider hardware or virtualization. There are many important considerations we must entertain when designing a highly available PostgreSQL cluster.

Why, then, is the critical application and user data that drives the entire application stack, and the company behind it, so often treated so callously? We direct so much attention and focus to the application, with its various layers of indirection, queues, caches, container automation, and microarchitecture, that the data layer is overlooked or considered a nuisance.

This is actually highly understandable. In most cases, a PostgreSQL database layer demands an entirely different approach than the ones development, system administration, and other information technology fields are accustomed to. Even experienced database administrators may not fully grasp the scale and theoretical concepts that drive the high availability of databases.

While we can't reduce the subtle art of database server architecture to a few memorable quips sure to entertain at parties, we can make the subject far more approachable. It shouldn't be necessary to have a Ph.D. in abstract theoretical frameworks to prevent a costly database outage.

In this chapter, we will learn how the layout of the nodes in our PostgreSQL cluster can drastically influence its availability. We will cover the following recipes:

Setting expectations with RPO

Defining timetables through RTO

Picking redundant copies

Selecting locations

Having enough backups

Considering quorum

Introducing indirection

Preventing split brain

Incorporating multi-master

Leveraging multi-master

Setting expectations with RPO

RPO is a common term in business continuity known as Recovery Point Objective. In the context of a database system, it describes the amount of data that may be lost following an unexpected outage, before the system is once again operational. It's important to understand this at an early stage because it will drive decisions such as node count, data synchronization methods, and backup technologies.

In this recipe, we will examine the ingredients for concocting a comprehensive RPO that will influence the PostgreSQL cluster composition itself.

Getting ready

The first thing we need to do is set expectations. These are most often defined by upper management or some other decision-making entity. Data loss is never desirable but is unavoidable in catastrophic scenarios. How much data loss can the business tolerate under these circumstances? Seconds, minutes, or hours' worth?

This recipe will mainly focus on information gathering from key individuals, so make sure it's possible to at least email anyone involved with the application stack. Hardware purchases depend on budget proposals, so it may even be necessary to interact with VP and C-level executives as well. Even if we don't do this right away, try to determine the extent of influence available to you.

How to do it...

Since we're dealing with many stakeholders and considerations, we should iterate through them where possible. Try to follow a process like this:

Seek the input of major decision makers:

VP and C-level executives involved with technology

Product manager

Application designers and architects

Infrastructure team lead

Find an amount of time that will satisfy most or all of the above.

Follow the rest of the advice in this chapter to find a suitable architecture.

Try to determine a rough cost for this and the closest alternative.

Present one or more designs and cost estimates to decision makers.

Document the final RPO decision and architecture as reference material.

How it works...

Decision makers such as the technology VP, CEO, CTO, and such are the final word in most cases. Their input is vital and should be considered a requirement before ever taking a step further. Keep in mind that these people are likely not familiar with the technical feasibility of their demands at this extreme implementation level. When asked a question such as "How much data can we lose in a major outage?", they're probably going to say "None!" Regardless, this is a vital first step for reasons that will shortly become apparent.

Then, we simply traverse the stack of people who helped define the features the application stack fulfills, those who designed and implemented it, and whoever may be in charge of the requisite hardware and network where everything runs. Perhaps the design has a built-in tolerance for certain amounts of loss. Perhaps inherent queues or caches act as a sort of buffer for data backend difficulties. Maybe the design assumes there are multiple data systems all ingesting the same stream for redundancy. The architecture and those who built it are the best sources of this information.

Once we know the maximum amount of data the backend can lose before being restored, we must apply what we learn from the rest of this chapter and choose one or two best-case designs that can deliver that promise. The point here is that we will be executing this recipe several times until everyone agrees to all inherent design costs and limitations before continuing.

The best way to estimate cost is to take the chosen database server architectures and iterate a gross cost for each element. The next chapter on Hardware Planning describes in detail how to do this. We don't have to be exact here; the goal is to have some numbers we can present to decision makers. Do they still want zero RPO if it costs 10x as much as ten seconds of data loss? Are they willing to compromise on a hybrid design?
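When framing these cost discussions, it can help to translate an RPO window into a concrete volume of lost data. The following is a minimal sketch, assuming a hypothetical sustained write rate; substitute measurements from your own workload:

```python
# Rough illustration: translate an RPO window into potential data loss.
# The write rate below is a hypothetical figure, not a benchmark.

def estimated_loss(write_rate_tps: float, rpo_seconds: float) -> float:
    """Transactions potentially lost if the primary fails at full load."""
    return write_rate_tps * rpo_seconds

# A system sustaining 500 writes per second with a 10-second RPO
# would risk roughly 5,000 transactions in a worst-case failure.
print(estimated_loss(500, 10))
```

Numbers like these give decision makers something tangible to weigh against the cost of a tighter RPO.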

Once we have chosen a final structure, possibly the most important step is to produce a document describing that architecture, why it was chosen, the known limitations, and the RPO it delivers. Present this document to decision makers and encourage them to sign it if possible. Save it in any corporate documentation management system available, and make sure it's one of the first things people see regarding the database cluster layer. This document will single-handedly answer multiple questions about the capabilities of the database cluster, all while acting as a reference specification.

There's more...

RPO is considered a vital part of business continuity planning. Entire books have been written on this subject, and what we've presented here is essentially a functional summary. The subject is deep and varied, rich with its own inherent techniques beyond simply architecture and design. It is the language of business and resource management, so it can be a key component when interacting with decision makers.

Learning these concepts in depth can help influence the overall application stack to a more sustainable long-term structure. We'll cover more of these techniques in this chapter, but don't be afraid to proactively incorporate these techniques into your repertoire.

Defining timetables through RTO

Like RPO, RTO refers to a common business continuity term known as Recovery Time Objective. In practice, this is the amount of time an outage of the database layer may last. Often, it is incorporated into a Service Level Agreement (SLA) contract presented to clients or assumed as a metric within the application stack. Like RPO, this is a contractual-level element that can determine the number of required nodes at steadily increasing expense as the amount of tolerable downtime decreases.

In this recipe, we will examine the necessary steps to defining a realistic RTO, and what that could mean given known industry standards.

Getting ready

As with RPO, our goal in determining a functional RTO is to set expectations regarding inherent architecture limitations. The primary difference here is that RTO is more easily quantifiable. Fire up your favorite spreadsheet program, such as OpenOffice, Microsoft Excel, or Google Sheets; we'll be using it to keep track of how much time each layer of the application, including the database layer, contributes to a potential outage scenario.

How to do it...

We simply need to produce a spreadsheet to track all of the elements of known RTO that depend on the database. We can do this with the following steps:

Locate an already-defined RTO SLA for each portion of the application dependent on PostgreSQL if possible.

If this does not exist, seek the input of major decision makers:

VP and C-level executives involved with technology

Product manager

Application designers and architects

Infrastructure team lead

Find an amount of time that will satisfy most or all of the above.

Create a new spreadsheet for RTO.

Create a heading row with the following columns:

Activity

Time (seconds)

Count

Total (seconds)

In the Total column, create the following formula:

=B2*C2

Create one row for each of the following Activity categories:

Minor Upgrade

Major Upgrade

Reboot

Switchover

Failover

OS Upgrade

Etc.

Copy and paste the formula into the Total column for all the rows we created.

At the bottom of the Total column, after all relevant rows (row 21, for example), create the following formula:

=SUM(D2:D20)

Ensure that the end result resembles the spreadsheet layout described in the preceding steps.

Follow the rest of the advice in this chapter to find a suitable architecture.

Try to determine a rough cost for this and the closest alternative(s).

Present the design and cost estimates to decision makers.

Document this final RTO decision and architecture as reference material.

How it works...

In order to see where our PostgreSQL cluster fits company expectations, we need to know whether the company and each individual part of the existing application stack have an overall target RTO. If they don't, it's our job to approximate one. This means contacting any decision makers, product owners, architects, and so on, to know what RTO target we're trying to attain and how other resources may contribute. These will act as a type of maximum value we can't exceed.

Keep in mind that RTO values tend to be amplified between layers. If our RTO is higher than some portion of the application stack, that will necessarily raise the RTO of that layer as well, which may increase the RTO of each subsequent layer. This is the exact scenario we're trying to avoid.

Once we have an RTO expectation, we need to examine how possible it is to fall under that target. The easiest way to accomplish this is to build a spreadsheet that essentially consists of a list of dependencies, maintenance tasks, or other occurrences related to PostgreSQL.

The rows we used for Activity are mainly suggestions, and producing an exhaustive list is generally dependent on the architecture to a certain extent. However, all software requires upgrades, machines need to be rebooted, switchover tests to prove high availability functionality may be required, past experience with the full application stack and hardware may imply two unexpected outages per year, and so on. Each of these will contribute to the cumulative RTO for PostgreSQL which we can use as a reference value.

The number we use for the Count column should be the number of times the Activity happens on a yearly basis. As an example, PostgreSQL has a quarterly release schedule for non-critical bug and security enhancements. If you want to follow along with these, it could make sense to set the Count column of Minor Upgrade to 4.
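The spreadsheet tally is simple enough to sketch in code. The activities and durations below are illustrative assumptions only; substitute figures from your own maintenance history:

```python
# A minimal sketch of the RTO spreadsheet as code. Each tuple mirrors a
# row: (activity, seconds per event, events per year). The durations are
# hypothetical placeholders, not recommendations.

activities = [
    ("Minor Upgrade", 300, 4),   # quarterly PostgreSQL point releases
    ("Major Upgrade", 1800, 1),
    ("Reboot", 120, 2),
    ("Switchover", 30, 2),
    ("Failover", 60, 1),
    ("OS Upgrade", 600, 1),
]

# Equivalent of the =B2*C2 per-row formula and the =SUM(D2:D20) total.
total_seconds = sum(seconds * count for _, seconds, count in activities)
print(f"Cumulative yearly downtime: {total_seconds} seconds "
      f"({total_seconds / 60:.1f} minutes)")
```

Running variations of this with different durations and counts is a quick way to compare candidate architectures before committing to a spreadsheet.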

A number of architectural examples that we'll discuss later in this chapter will make it possible to set the Time column to 0 for some actions, or at least to a much lower value. We'll discuss these where relevant. This is also one of the reasons we'll need to execute this recipe multiple times when deciding on an appropriate architecture.

Once we have accounted for as many foreseeable Activity components as may be necessary over the course of a year, we'll have a cumulative total that may represent the RTO that PostgreSQL can achieve for a given architecture. As a sanity check, we should compare that value to the lowest RTO for any parts of the application stack that depend on PostgreSQL. It's important we don't exceed this target.

Then, as with RPO, we need to present the possible RTO to decision-makers so that it can be integrated into the overall company RTO. To do that, we must continue with the rest of the chapter to find one or two architectures with either higher or lower expected RTO, estimate the cost of each, and work on a suitable compromise.

Deriving an appropriate RTO may require multiple iterations of this recipe, from estimation, architecture selection, presenting it to appropriate parties, and so on. This isn't a fast or simple process, and it pays to get it right early. We need to know how many PostgreSQL nodes to purchase, where each will reside, how we switch to alternatives, how much time each step may take, and so on.

There's more...

Besides what we discussed in the main recipe, there are other RTO concepts we would like to explore.

This may seem familiar

Believe it or not, it's very likely you've encountered this concept without even realizing it. Internet service providers or application hosts often advertise how many 9s of availability their platform can maintain. It's often presented as a chart like this:

Uptime (%)   Daily     Weekly       Monthly      Yearly
99           14m 24s   1h 40m 48s   7h 18m 18s   3d 15h 39m 30s
99.9         1m 26s    10m 5s       43m 50s      8h 45m 57s
99.99        8.6s      1m 1s        4m 23s       52m 36s
99.999       0.9s      6s           26.3s        5m 16s

As you can imagine, it's generally more desirable to stay toward the higher end of 9s to minimize downtime. On the other hand, this is highly restrictive, as Five 9s only allows just over five minutes of downtime over the course of an entire year. This doesn't leave much room for database maintenance tasks or unexpected outages at any other layer of the stack.
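The downtime budgets in charts like this can be derived directly from the uptime percentage. Here is a quick sketch, using a 365-day year (published charts sometimes use 365.25 days, so figures may differ by a few seconds):

```python
# Derive the yearly downtime budget for a given uptime percentage.

def yearly_downtime_seconds(uptime_percent: float) -> float:
    """Seconds of allowable downtime per 365-day year."""
    return (1 - uptime_percent / 100) * 365 * 24 * 3600

for nines in (99.0, 99.9, 99.99, 99.999):
    secs = yearly_downtime_seconds(nines)
    print(f"{nines}% -> {secs / 60:.1f} minutes/year")
```

For instance, five 9s allows roughly 315 seconds, a little over five minutes, per year.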

Node counts

Generally, the more nodes we have, the lower our RTO will be. It may make sense to start with an initial estimate spreadsheet, and then create another for each architecture or variant that seems applicable. This will make it easier to rank the monetary cost and associated RTO for each. This may influence the final decision, and hence make it easier to track what options we may have.

Picking redundant copies

How many database servers should any architecture have as part of the inherent design? There are several factors that contribute to this answer, including the design of the final architecture itself. The number of redundant data copies ultimately determines how many nodes must exist, whether we require more data centers, whether we should account for latency, and so on.

The goal in this recipe is to consult our needs to derive a node count that won't break the bank, but still deliver the level of availability we want. In other words, aside from our primary data node, we will explain how to figure out the number of redundant data nodes necessary to adequately represent the entire cluster safely and ensure high availability.

Getting ready

Luckily, this recipe is a simple counting exercise. The only necessary elements are a healthy imagination and perhaps some idea of the budgetary constraints before we begin. Just consider that, for any of our reference designs, we will always require more than one server node as a minimum.

How to do it...

Observe the following steps when considering node counts driven by high availability architectures:

Always add one separate server for backups.

Always allocate one server for a logical or physical replica.

For automated failover, allocate the following:

An additional small VM / node to act as a voter

OR a fully qualified replica

For every active data center beyond the first two, allocate one replica.

If non-local data access latency is a concern, allocate the following:

An additional replica in the primary location

An additional replica in each location for symmetric clusters
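The steps above amount to a counting exercise, which can be sketched as a simple function. The rules mirror the recipe steps; treat the result as a starting estimate under these assumptions, not a definitive quota:

```python
# A sketch of the node-counting recipe as a function. The rule weights
# follow the steps above; adjust them to match your own requirements.

def minimum_node_count(data_centers: int,
                       automated_failover: bool,
                       latency_sensitive: bool,
                       symmetric: bool = False) -> int:
    nodes = 1          # the primary data node
    nodes += 1         # a separate backup server
    nodes += 1         # at least one logical or physical replica
    if automated_failover:
        nodes += 1     # a small voting node (or a full replica)
    nodes += max(0, data_centers - 2)   # one replica per extra active DC
    if latency_sensitive:
        # an extra replica in the primary location, or one per location
        # when the data centers are symmetric
        nodes += data_centers if symmetric else 1
    return nodes

# Two data centers, manual failover, no latency concerns:
print(minimum_node_count(2, automated_failover=False,
                         latency_sensitive=False))
```

With two data centers, no automated failover, and no latency concerns, this yields three nodes, matching the Company A example discussed later in this recipe.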

How it works...

Why do we demand at least one backup server? The full answer to this question actually has its own recipe in this chapter. However, catastrophic failure is a fact of life and we must be ready for such an event. Even if the separate server is not a fully operational PostgreSQL node, it must exist and should be part of the reference design.

Likewise, we must have at least one PostgreSQL replica. Some of our designs work with either physical or logical replicas, so we won't differentiate between them here. Simply assume that every highly active PostgreSQL cluster must have at least two nodes that can fulfill the role of a primary database. Backups take time to restore, whereas a replica can usually be promoted to a writable state in a minute or less.

One replica only really covers the case where switching from one PostgreSQL node to the alternate is a manual procedure. Fully automated failure detection mechanisms require an odd number of nodes for voting purposes. This third node can either be a mere voting entity, or a full PostgreSQL replica. We cover this in greater depth in the Considering quorum recipe.

Once we start accounting for multiple geographic locations at different data centers, things don't change dramatically. By now, we have at least one PostgreSQL replica that is probably at the first alternate location. If we have three or more active data centers where the application is using PostgreSQL, we'll want a local replica for each.

Then, consider the implications of limiting ourselves to merely one PostgreSQL node per location. This means any minor upgrade or other maintenance task will mean switching to an alternate data center while the maintenance is active. This can introduce unwanted latency that will affect the application. To reduce this, add one replica to the primary location to account for this effect. For symmetrical data centers that have no primary location, add a PostgreSQL replica to each location for the same reasons.

As a quick example, consider two scenarios. Our first company (Company A) only uses two data centers, and doesn't need automated database failover, nor is it necessary to worry about downtime caused by minor upgrades. In this case, they decided to use two PostgreSQL servers and a backup system. This is a minimum of three nodes related to PostgreSQL, and their cluster looks like this:

Company A

In the second case (Company B), we have a much more demanding financial institution that requires all three of their data centers to be active at all times. They chose to have one Primary PostgreSQL server, two Replicas per data center, a Witness node, and a Backup server. In that extreme case, they used a total of eight nodes dedicated to PostgreSQL.

Their cluster would look like this:

Company B
Nodes denoted here are specifically meant to be running PostgreSQL. Architectures discussed in this chapter will include various abstraction layers and, as a result, will likely require even more nodes in the cluster. PostgreSQL nodes tend to be much larger and do far more work, so are the focus of this recipe.

There's more...

Chosen node counts will naturally suggest certain architectures over others. The reason we're performing this recipe so early is to get an idea of compatible architectures. It may be necessary to revise these counts once we learn information from other recipes in this chapter.

Selecting locations

Once we've decided how many PostgreSQL nodes to allocate in our cluster, where should we put them? Generally, this is easy to answer, but there are some subtleties we need to consider as well. A truly high availability cluster can resist many different types of failure, including where the servers themselves reside.

In this recipe, we will learn all about the ways separate geographical locations can affect our chosen design.

Getting ready

It's time to start drawing diagrams. Find your favorite drawing program, such as Visio, Dia, or Draw.io, or a convenient sheet of paper or whiteboard.

Keep in mind that the ideal design may require more data centers than the company currently utilizes. In these cases, it may be possible to supply sufficient justification to contract at least one more location if it benefits the RPO or RTO. Hence, we recommend following the Setting expectations with RPO recipe and Defining timetables through RTO recipe before continuing here.

How to do it...

Consider these basic guidelines while thinking about how many data centers are necessary, and which nodes should be placed in each:

If data must be available in case of a site outage, use one additional location.

Always place the backup in a separate location if possible.

If two locations are in the same general geographical area, use one additional location at least 100 miles (160 km) away.