For a tiny database, there is little need to run pg_dump in parallel using multiple connections. But for databases that are several GB or TB in size, you may wish to run pg_dump with multiple parallel jobs. If pg_dump can use several CPUs in parallel without exhausting the server's available resources, the backup completes faster.
Multiple processes are spawned when pg_dump is invoked in parallel mode. For this reason, it is not wise to allocate all the available CPUs on the database server to pg_dump if the server is also serving application traffic.
In this recipe, we shall discuss how to back up and restore a large database faster using the parallel features of pg_dump.
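As a brief preview, a parallel dump requires the directory output format (`-Fd`), and the matching restore is done with pg_restore using the same `-j` option. The database name `salesdb` and the backup path below are hypothetical placeholders:

```shell
# Sketch only: assumes a reachable PostgreSQL server and a database
# named "salesdb"; adjust the name, path, and job count to your setup.

# Dump with 4 parallel jobs; -Fd (directory format) is required for -j.
pg_dump -Fd -j 4 -f /backups/salesdb_dir salesdb

# Restore the directory-format dump with 4 parallel jobs.
pg_restore -j 4 -d salesdb_restored /backups/salesdb_dir
```

Note that the plain-text format does not support `-j`; only the directory format allows pg_dump to write multiple files concurrently.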
Getting ready
In order to use pg_dump either locally or remotely, the postgresql-13 package must be installed on the server from which pg_dump is being run.
Additionally, it is better to observe the CPU load and allocate a safe number of parallel processes...
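One way to pick a job count without starving other workloads is to reserve a share of the cores for regular traffic. The half-the-cores heuristic below is an assumption for illustration, not a rule from the recipe:

```shell
# Hypothetical heuristic: use at most half the available cores for
# pg_dump, leaving the rest for the database's normal traffic.
CORES=$(nproc)              # count of online CPU cores
JOBS=$(( CORES / 2 ))       # reserve half for other workloads
[ "$JOBS" -lt 1 ] && JOBS=1 # always run at least one job
echo "Using $JOBS parallel jobs out of $CORES cores"
```

The resulting value can then be passed to `pg_dump -j "$JOBS"`. On a busy server you might also watch `top` or `iostat` during a trial run and lower the count if load climbs too high.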