PostgreSQL 10 High Performance

PostgreSQL Versions

PostgreSQL certainly has a reputation. It's known for having a rich feature set and very stable software releases. The secure stance that its default configuration takes is simultaneously praised by security fans and criticized for its learning curve. The SQL-specification conformance and data integrity features allow only the strictest ways to interact with the database, which is surprising to those who come from a background working with looser desktop database software. All of these points have an element of truth to them.

Another part of PostgreSQL's reputation is that it's slow. This, too, has some truth to it, even today. There are many database operations where the right thing takes longer to do than the alternative. As the simplest example of this, consider the date February 29, 2009. With no leap year in 2009, that date is only valid as an abstract one. It's not possible for this to be the real date of something that happened. If you ask the database to store this value into a standard date field, it can just do that, the fast approach. Alternatively, it can check whether that date is valid to store into the destination field, note that there is no such date in a regular calendar, and reject your change. That's always going to be slower. PostgreSQL is designed by, and intended for, the sort of people who don't like cutting corners just to make things faster or easier, and in cases where the only way you can properly handle something takes a while, that may be the only option available.
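You can see this strictness directly with psql. The following is a minimal sketch; the exact error wording can vary by version, but the invalid date is rejected rather than stored:

$ psql -c "SELECT '2009-02-29'::date;"
ERROR:  date/time field value out of range: "2009-02-29"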

However, once you have a correct implementation of something, you can then go back and optimize it. That's the mode PostgreSQL has been in for the last few years. PostgreSQL usually rises above these smaller issues to give excellent database performance. Parts of it have the sort of great design that outperforms simpler approaches, even after paying the overhead that complexity can introduce. This is a fairly recent phenomenon though, which explains quite a bit about the perception that PostgreSQL is a slower database than its competitors. In this chapter, we will cover the following topics:

  • Performance of historical PostgreSQL releases
  • PostgreSQL or another database?
  • PostgreSQL tools
  • PostgreSQL application scaling life cycle
  • Performance tuning as a practice

Performance of historical PostgreSQL releases

In November 2005, PostgreSQL 8.1 was released. It included a number of internal architectural changes, some of which aimed to improve how fast the database would run on a multiprocessor system with many active clients. The result was a major improvement in the ability of the database to scale upwards to handle a heavy load. Benchmarks on modern hardware really highlight just how far that version leapfrogged earlier ones. You can find an excellent performance comparison of versions 8.0 through 8.4 from György Vilmos at http://suckit.blog.hu/2009/09/29/postgresql_history. This shows exactly how dramatic these improvements have been.

This test gives a transactions per second (TPS) figure that measures the total system speed, and you can run it in either a read-only mode or one that includes writes. The read-only performance improved by over four times from 8.0 to 8.1 and more than doubled again by 8.3:

Version    Peak read-only TPS    # of clients at peak
8.0.21     1256                  4
8.1.17     5620                  14
8.2.13     8109                  18
8.3.7      13984                 22
8.4.1      13546                 22

The rise in the number of clients at the peak load gives us an idea of how well the database internals handle access to shared resources. Version 8.1 in particular included a significant upgrade in this area. Performance improved similarly on the write side, with almost an 8 times gain between 8.0 and 8.3:

Version    Peak write TPS    # of clients at peak
8.0.21     361               2
8.1.17     873               10
8.2.13     1358              14
8.3.7      2795              18
8.4.1      2713              12

The small decrease in performance from 8.3 to 8.4 in both these tests is due to some subtle retuning of the database to improve its worst-case performance. More statistics are collected in 8.4 to improve complicated queries, at the expense of slightly slowing the sort of trivial ones tested here.
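If you want to produce broadly comparable read-only and write TPS figures on your own hardware, pgbench (covered in Chapter 8, Database Benchmarking) can do so. This is only a sketch, not the exact test used for the tables above; the scale factor and client count need tuning for your system:

$ createdb pgbench
$ pgbench -i -s 100 pgbench        # initialize test tables at scale factor 100
$ pgbench -S -c 16 -T 60 pgbench   # read-only (SELECT-only) test, reports TPS
$ pgbench -c 16 -T 60 pgbench      # TPC-B-like read/write test, reports TPS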

These improvements have been confirmed by other benchmarking results, albeit normally not covering such a wide range of versions. It's easy to see that any conclusion about PostgreSQL performance reached before late 2005, when 8.1 shipped, is completely out of date at this point. The speed improvement in 2008's 8.3 release was an additional large leap. Versions before 8.3 are not representative of current performance, and there are other reasons to prefer 8.3 or later too.

Choosing a version to deploy

Because of these dramatic gains, if you have an older PostgreSQL system you'd like to make faster, the very first thing you should ask yourself is not how to tweak its settings, but instead whether it's possible to upgrade to a newer version. If you're starting a new project, 8.3 is the earliest version you should consider. In addition to the performance improvements, there were some changes in that version that impact application coding, and you'd be better off starting with them than needing to retrofit later.

Chapter 16, Avoiding Common Problems, includes a reference guide to what performance-related features were added to each major version of PostgreSQL from 8.1 through 10.0. You might discover that one of the features only available in a very recent version is compelling to you, and therefore you have a strong preference to use that one. Many of these version-specific changes are also highlighted throughout the book.

Upgrading to a newer major version

Until very recently, the only way to upgrade an existing PostgreSQL version to a newer major version, such as going from 8.1.X to 8.2.X, was to dump and reload. The pg_dump and/or pg_dumpall programs are used to write the entire content of the database to a file, using the newer versions of those programs. That way, if any changes need to be made to upgrade, the newer dumping program can try to handle them. Not all upgrade changes will happen automatically, though. Then, depending on the format you dumped in, you can either restore it just by running the script it generates or use the pg_restore program to handle that task. pg_restore can be a much better alternative in newer PostgreSQL versions that include parallel restore capabilities.
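Here's a hedged sketch of that process using the newer version's tools; the ports, paths, and database name are placeholders:

$ pg_dump -Fc -f mydb.dump -p 5432 mydb       # dump from the old server in custom format
$ createdb -p 5433 mydb                       # create the target database on the new server
$ pg_restore -j 4 -d mydb -p 5433 mydb.dump   # parallel restore with four jobs

The -j option enabling parallel restore jobs appeared in the 8.4 pg_restore.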

If you are using a system that doesn't easily allow you to run more than one version of PostgreSQL at a time, such as the current RedHat Linux RPM packages, getting both old and new versions of PostgreSQL installed on your system at the same time can be difficult. There are some changes to improve this situation that were developed for PostgreSQL 9.0 and 10.0. Make sure to check the feasibility of running more than one version at once as part of planning an upgrade.

Dumping can take a while, and restoring can take even longer. While this is going on, your database likely needs to be down, so that you don't allow any changes that won't then be migrated over by the dump. For large databases, this downtime can be both large and unacceptable.

The most demanding sites need near-zero downtime in order to run 24/7. There, a dump and reload is never an acceptable option. Until recently, the only real approach available for doing PostgreSQL upgrades in those environments has been using statement replication. Slony is the most popular tool for that, and more information about it is available in Chapter 14, Scaling with Replication. One of Slony's features is that you don't have to be running the same version of PostgreSQL on all the nodes you are replicating to. You can bring up a new node running a newer PostgreSQL version, wait for replication to complete, and then switch over once it matches the original.

Another tool used for asynchronous primary/secondary replication is Londiste from SkyTools. One of the benefits of Londiste over the streaming replication that's in the core of PostgreSQL is that Londiste can replicate a single database, or even a single table, from a database; streaming replication creates an exact copy of the whole database server. Londiste provides more granularity for replication, which makes it ideal for this sort of migration: it allows us to move databases from several servers to one unified server.

Now, there is another way available that works without needing any replication software. A program originally called pg_migrator is capable of upgrading from 8.3 to 8.4 without the dump and reload. This process is called in-place upgrading. You need to test this carefully, and there are both known limitations and likely still unknown ones related to less popular PostgreSQL features. Be sure to read the documentation of the upgrade tool very carefully. Starting in PostgreSQL 9.0, this module is included with the core database, with the name changed to pg_upgrade. pg_upgrade is a native PostgreSQL command and must be run while the database server is offline. While all in-place upgrades have some risk and need careful testing, in many cases these will take you from 8.3 or 8.4 to 10.0 and hopefully beyond.
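A minimal pg_upgrade run looks like the following sketch; the binary and data directory paths are examples that vary by platform, and --check performs a dry run without changing anything:

$ pg_upgrade --check \
    -b /usr/lib/postgresql/9.6/bin -B /usr/lib/postgresql/10/bin \
    -d /var/lib/postgresql/9.6/main -D /var/lib/postgresql/10/main
$ pg_upgrade \
    -b /usr/lib/postgresql/9.6/bin -B /usr/lib/postgresql/10/bin \
    -d /var/lib/postgresql/9.6/main -D /var/lib/postgresql/10/main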

The PostgreSQL development community is now moving to an online replication approach; for example, the pglogical extension for PostgreSQL provides much faster replication than Slony, Bucardo, or Londiste, as well as cross-version upgrades.

Upgrades to PostgreSQL 8.3+ from earlier ones

The major internal changes in 8.3 make it impossible to upgrade from any earlier version to it or beyond without dumping the entire database and reloading it into the later one. This makes 8.3 a doubly important version milestone to cross. Not only is it much faster than 8.2; once your data is in 8.3, you can perform in-place upgrades from there onward.

Going from an earlier version to PostgreSQL 8.3 or later can be a difficult change. Some older applications rely on non-character data types being transparently cast to the text type, a behavior removed from 8.3 for a variety of reasons. For details, see http://www.postgresql.org/docs/8.3/static/release-8-3.html.

While there's always a chance that upgrading your database version can introduce new issues, it is particularly likely that applications written against an earlier version will need to be updated to work against 8.3 or later. It is possible to work around this issue by manually adding back the automatic typecasting features that were removed. However, fixing the behavior in your application instead is a more robust and sustainable solution. The old behavior was eliminated because it caused subtle application issues. If you just add it back, you'll both be exposed to those issues and need to keep performing this extra step with every new PostgreSQL release. There is more information on this topic, and on the general challenges of doing a major PostgreSQL upgrade, available at https://www.endpoint.com/blog/2010/01.
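As a hedged example of the kind of breakage involved, code that relied on a date being implicitly treated as text fails on 8.3 and later until an explicit cast is added; the exact error wording may vary:

$ psql -c "SELECT substr(current_date, 1, 4);"
ERROR:  function substr(date, integer, integer) does not exist
$ psql -c "SELECT substr(current_date::text, 1, 4);"   # explicit cast works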

Minor version upgrades

A dump/reload, or the use of tools such as pg_upgrade, is not needed for minor version updates, for example, going from 8.4.1 to 8.4.2. These simply require stopping the server, installing the new version, and then running the newer database binary against the existing server data files. Some people avoid ever doing such upgrades once their application is running for fear that a change in the database will cause a problem. This should never be the case for PostgreSQL.
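On a packaged installation, the whole procedure can look like the following sketch; the package and service names are examples that vary by distribution:

$ sudo systemctl stop postgresql
$ sudo yum update postgresql-server    # or apt-get upgrade on Debian-style systems
$ sudo systemctl start postgresql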

The policy of the PostgreSQL project described at http://www.postgresql.org/support/versioning states very clearly:
While upgrades always have some risk, PostgreSQL minor releases fix only frequently-encountered security and data corruption bugs to reduce the risk of upgrading.

You should never find an unexpected change that breaks an application in a minor PostgreSQL upgrade. Bug, security, and corruption fixes are always done in a way that minimizes the odds of introducing an externally visible behavior change, and if that's not possible, the reason why and the suggested workarounds will be detailed in the release notes. What you will find is that some subtle problems, resulting from resolved bugs, can clear up after a minor version update. It's not uncommon for a problem reported to one of the PostgreSQL mailing lists to have already been resolved in the latest minor version compatible with that installation, in which case upgrading to that version is all that's needed to make the issue go away.

Migrating from PostgreSQL 9.x to 10.x – a new way to work

Starting from version 9, it is possible to migrate a complete cluster (users and databases) using pg_upgrade. This is useful for migrating from one major version to another, for example from PostgreSQL 9.6 to PostgreSQL 10. This approach is safe and faster than a dump/restore, because pg_upgrade migrates PostgreSQL data pages in a binary way and it is not necessary to rebuild any indexes.

As mentioned above, another approach is to use pglogical, a logical replication system implemented entirely as a PostgreSQL extension. Fully integrated, it requires no triggers or external programs. This alternative to physical replication is a highly efficient method of replicating data, using a publish/subscribe model for selective replication. Using pglogical, we can migrate and upgrade PostgreSQL with almost zero downtime.
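A rough sketch of the pglogical flow follows, assuming the extension is installed on both the old and new clusters and wal_level is set to logical; the node names and connection strings are placeholders:

$ psql -d mydb -c "CREATE EXTENSION pglogical;"    # run on both servers
$ psql -d mydb -c "SELECT pglogical.create_node(node_name := 'provider', dsn := 'host=oldhost dbname=mydb');"
$ psql -d mydb -c "SELECT pglogical.replication_set_add_all_tables('default', ARRAY['public']);"
# Then, on the new server, subscribe to the old one and let it catch up:
$ psql -d mydb -c "SELECT pglogical.create_node(node_name := 'subscriber', dsn := 'host=newhost dbname=mydb');"
$ psql -d mydb -c "SELECT pglogical.create_subscription(subscription_name := 'upgrade_sub', provider_dsn := 'host=oldhost dbname=mydb');"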

PostgreSQL or another database?

There are certainly situations where other database solutions will perform better. For example, PostgreSQL is missing features needed to perform well on some of the more difficult queries in the TPC-H test suite (see Chapter 8, Database Benchmarking, for more details). It's correspondingly less suitable for running large data warehouse applications than many of the commercial databases. If you need queries along the lines of some of the very heavy ones TPC-H includes, you may find that databases such as Oracle, DB2, and SQL Server still have a performance advantage worth paying for. There are also several PostgreSQL-derived databases that include features making them more appropriate for data warehouses and similar larger systems. Examples include Greenplum, Aster Data, and Netezza.

For some types of web applications, you can only get acceptable performance by cutting corners on the data integrity features in ways that PostgreSQL just won't allow. These applications might be better served by a less strict database, such as MySQL, or even a really minimal one, such as SQLite. Unlike the fairly mature data warehouse market, the design of this type of application is still moving around quite a bit. Approaches using the key/value-based NoSQL model, including CouchDB, MongoDB, and Cassandra, are all becoming more popular at the time of writing this. All of them can easily outperform a traditional database, provided you have no need to run the sort of advanced queries that key/value stores are slower at handling. PostgreSQL also natively supports and indexes the JSON data type for a NoSQL data approach.

PostgreSQL 10.x and NoSQL

Starting from version 9.4, PostgreSQL has the jsonb field type, and it can be used as a NoSQL system. jsonb fields are indexable, and starting from version 9.5, operators and functions are present in PostgreSQL that allow deleting, modifying, or inserting values within jsonb values, including at specific path locations.
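Here's a brief sketch of that usage; the table name and sample data are examples only:

CREATE TABLE docs (doc jsonb);
CREATE INDEX docs_gin ON docs USING gin (doc);
INSERT INTO docs VALUES ('{"user": "alice", "tags": ["pg"]}');
-- Containment search that can use the GIN index:
SELECT doc FROM docs WHERE doc @> '{"user": "alice"}';
-- Modify the value at a path (jsonb_set appeared in 9.5):
UPDATE docs SET doc = jsonb_set(doc, '{user}', '"bob"');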

PostgreSQL as a hub

Starting from version 9.3, PostgreSQL has foreign data wrapper (FDW) support. With an FDW, PostgreSQL can connect to many external database management systems (DBMSes) and see foreign tables (for example, MySQL or Oracle tables) as if they were local tables. Some of the best-known FDWs are:

  • Oracle
  • MySQL
  • Informix
  • Firebird
  • SQLite
  • CSV files
  • Sybase
  • Microsoft SQL Server
  • MongoDB
  • Cassandra

The complete list is available at https://wiki.postgresql.org/wiki/Foreign_data_wrappers.
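As a sketch of how an FDW is used, here is the postgres_fdw flow for attaching a table from another PostgreSQL server; the server name, connection options, credentials, and table definition are all placeholders:

CREATE EXTENSION postgres_fdw;
CREATE SERVER remote_pg FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'remote.example.com', dbname 'sales');
CREATE USER MAPPING FOR CURRENT_USER SERVER remote_pg
    OPTIONS (user 'reporting', password 'secret');
CREATE FOREIGN TABLE orders (id integer, total numeric)
    SERVER remote_pg OPTIONS (schema_name 'public', table_name 'orders');
-- The remote table can now be queried as though it were local:
SELECT count(*) FROM orders;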

PostgreSQL tools

If you're used to your database vendor supplying a full tool chain with the database itself, from server management to application development, PostgreSQL may be a shock to you. Like many successful open source projects, PostgreSQL tries to stay focused on the features it's uniquely good at. This is what the development community refers to as the PostgreSQL core: the main database server, and associated utilities, that can only be developed as a part of the database itself. When new features are proposed, if it's possible for them to be built and distributed out of core, this is the preferred way to do things. This approach keeps the database core as streamlined as possible, as well as allowing those external projects to release their own updates without needing to synchronize them against the main database's release schedule.

Successful PostgreSQL deployments should recognize that a number of additional tools, each with their own specialized purpose, will need to be integrated with the database core server to build a complete system.

PostgreSQL contrib

One part of the PostgreSQL core that you may not necessarily have installed is what's called the contrib modules (named after the directory they are stored in). These are optional utilities shipped with the standard package that aren't necessarily installed by default on your system. The contrib code is maintained and distributed as part of the PostgreSQL core, but it is not required for the server to operate.

From a code quality perspective, the contrib modules aren't held to quite as high a standard, primarily in how they're tested. The main server includes heavy regression tests for every feature, run across a large build farm of systems, looking for errors and confirming performance and stability. The optional contrib modules don't get that same level of testing coverage. However, the code itself is maintained by the same development team, and some of the modules are extremely popular and well tested by users.

A list of all the contrib modules available can be found at http://www.postgresql.org/docs/current/static/contrib.html.

Finding contrib modules on your system

One good way to check whether you have the contrib modules installed is to see if the pgbench program is available. That's one of the few contrib components that installs a full program, rather than just scripts you can use. Here's a Unix example of checking for pgbench:

$ pgbench -V
pgbench (PostgreSQL) 10.0  

If you're using an RPM or DEB packaged version of PostgreSQL, as would be the case on many Linux systems, the optional postgresql-contrib package contains all of the contrib modules and their associated installer scripts. You may have to add that package using yum, apt-get, or a similar mechanism if it wasn't installed already. On Solaris, the package is named SUNWpostgr-contrib.

If you're not sure where your system's PostgreSQL contrib modules are installed, you can use a filesystem utility to search (see the example after the following list). locate works well for this purpose on many Unix-like systems, as does the find command. The file search utilities available on the Windows Start menu will also work. A sample file to look for is pg_buffercache.sql, which will be used in Chapter 5, Memory for Database Caching, on memory allocation. Here's where that might be on some of the platforms that PostgreSQL supports:

  • RHEL and CentOS Linux systems will put the main file you need into /usr/share/pgsql/contrib/pg_buffercache.sql
  • Debian or Ubuntu Linux systems will install the file at /usr/share/postgresql/version/contrib/pg_buffercache.sql
  • Solaris installs it into /usr/share/pgsql/contrib/pg_buffercache.sql
  • The standard Windows one-click installer with the default options will always include the contrib modules, and this one will be in C:\Program Files\PostgreSQL\version\share\contrib\pg_buffercache.sql
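For example, on a Unix-like system, either of the following would locate the file:

$ locate pg_buffercache.sql
$ find /usr -name pg_buffercache.sql 2>/dev/null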

Installing a contrib module from source

Building your own PostgreSQL from source code can be a straightforward exercise on some platforms if you have the appropriate requirements already installed on the server. Details are documented at http://www.postgresql.org/docs/current/static/install-procedure.html.

After building the main server code, you'll also need to compile the contrib modules yourself. Here's an example of how that would work, presuming that your PostgreSQL destination is /usr/local/postgresql and that there's a directory there named source that you put the source code into (this is not intended to be a typical or recommended structure you should use):

$ cd /usr/local/postgresql/source
$ cd contrib/pg_buffercache/
$ make
$ make install
/bin/mkdir -p '/usr/local/postgresql/lib/postgresql'
/bin/mkdir -p '/usr/local/postgresql/share/postgresql/contrib'
/bin/sh ../../config/install-sh -c -m 755 pg_buffercache.so '/usr/local/postgresql/lib/postgresql/pg_buffercache.so'
/bin/sh ../../config/install-sh -c -m 644 ./uninstall_pg_buffercache.sql '/usr/local/postgresql/share/postgresql/contrib'
/bin/sh ../../config/install-sh -c -m 644 pg_buffercache.sql '/usr/local/postgresql/share/postgresql/contrib'
  

It's also possible to build and install all the contrib modules at once by running make followed by make install from the contrib directory.

Note that some of these have more extensive source code build requirements. The uuid-ossp module is an example of a more challenging one to compile yourself.

Using a contrib module

While some contrib programs, such as pgbench, are directly executable, most are utilities that you install into a database to add extra features to it.

As an example, to install the pg_buffercache module into a database named abc, the following command line would work (assuming the RedHat location of the file):

$ psql -d abc -f /usr/share/pgsql/contrib/pg_buffercache.sql

You could use the pgAdmin III GUI management utility, which is bundled with the Windows installer for PostgreSQL, instead of the command line:

  1. Navigate to the database you want to install the module into.
  2. Click on the SQL icon in the toolbar to bring up the command editor.
  3. Choose File/Open. Navigate to C:\Program Files\PostgreSQL\version\share\contrib\pg_buffercache.sql and open that file.
  4. Execute using either the green arrow or Query/Execute.

You can do a quick test of the module installed on any type of system by running the following quick query:

SELECT * FROM pg_buffercache;  

If any results come back, the module was installed. Note that pg_buffercache will only be installable and usable by database superusers.

pgFoundry

The official home of many PostgreSQL-related projects is pgFoundry.

pgFoundry only hosts software for PostgreSQL, and it provides resources such as mailing lists and bug tracking, in addition to file distribution. Many of the most popular PostgreSQL add-on programs are hosted there:

  • Windows software allowing access to PostgreSQL through .NET and OLE
  • Connection poolers, such as pgpool and pgBouncer
  • Database management utilities, such as pgFouine, SkyTools, and PgTune

While sometimes maintained by the same people who work on the PostgreSQL core, pgFoundry code varies significantly in quality. One way to help spot the healthier projects is to note how regularly and recently new versions have been released.

PGXN

Another site where it is possible to find many PostgreSQL-related projects is PGXN. PGXN is more recent than pgFoundry, and newer extensions are more likely to be found there.

The PostgreSQL Extension Network (PGXN) is a central distribution system for open source PostgreSQL extension libraries. It consists of four basic parts:

  • PGXN Manager: An upload and distribution infrastructure for extension developers
  • PGXN API: A centralized index and API of distribution metadata
  • PGXN Search: This site is for searching extensions and perusing their documentation
  • PGXN Client: A command-line client for downloading, testing, and installing extensions

The difference between pgFoundry and PGXN is that pgFoundry is about project management and PGXN is about distribution and exposure.

Additional PostgreSQL-related software

Beyond what comes with the PostgreSQL core, the contrib modules, and software available on pgFoundry, there are plenty of other programs that will make PostgreSQL easier and more powerful. These are available from sources all over the internet. There are actually so many available that choosing the right package for a requirement can itself be overwhelming.

Some of the best programs will be highlighted throughout the book, to help provide a short list of the ones you should consider early. This approach, where you get a basic system running and then add additional components as needed, is the standard way large open source projects are built.

It can be difficult for some corporate cultures to adapt to that style, such as ones where any software installation requires everything from approval to a QA cycle. In order to improve the odds of your PostgreSQL installation being successful in such environments, it's important to start introducing this concept early on. Additional programs to add components building on the intentionally slim database core will be needed later, and not all of what's needed will be obvious at the beginning.

PostgreSQL application scaling life cycle

While every application has unique growth aspects, there are many common techniques that you'll find necessary as an application using a PostgreSQL database becomes used more heavily. The chapters of this book each focus on one of the common aspects of this process. The general path that database servers follow includes the following steps:

  1. Select hardware to run the server on. Ideally, you'll test that hardware to make sure it performs as expected too.
  2. Set up all the parts of database disk layout: RAID level, filesystem, and possibly table/index layout on disk.
  3. Optimize the server configuration.
  4. Monitor server performance and how well queries are executing.
  5. Improve queries to execute more efficiently, or add indexes to help accelerate them.
  6. As it gets more difficult to just tune the server to do more work, instead reduce the amount it has to worry about by introducing connection pooling and caching.
  7. Partition larger tables into sections. Eventually, really large ones may need to be split so that they're written to multiple servers simultaneously.

This process is by no means linear. You can expect to make multiple passes over optimizing the server parameters. It may be the case that you decide to buy newer hardware first, rather than launching into replication or partitioning work that requires application redesign work. Some designs might integrate caching into the design from the very beginning. The important thing is to be aware of the various options available and to collect enough data about what limits the system is reaching to decide which of the potential changes is most likely to help.

Performance tuning as a practice

Work on improving database performance has its own terminology, just like any other field. Here are some terms and phrases that will be used throughout the book; the terms bottleneck and limiting factor will both be used to refer to the current limitation that is preventing performance from getting better:

  • Benchmarking: Running a test to determine how fast a particular operation can run. This is often done to figure out where the bottleneck of a program or system is.
  • Profiling: Monitoring what parts of a program are using the most resources when running a difficult operation, such as a benchmark. This is typically done to help prove where the bottleneck is, and whether it's been removed as expected after a change. Profiling a database application usually starts with monitoring tools, such as vmstat and iostat (see the sketch after this list). Popular profiling tools at the code level include gprof, OProfile, and DTrace.
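A monitoring session of that sort often starts with something as simple as the following, each printing five one-second samples:

$ vmstat 1 5      # CPU, memory, and swap activity
$ iostat -x 1 5   # extended per-disk utilization statistics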

One of the interesting principles of performance tuning work is that, in general, you cannot figure out what bottleneck an application will next run into until you remove the current one. When presented with a system that's not as fast as someone would expect it to be, you'll often see people guessing what the current bottleneck is, or what the next one will be. That's generally a waste of time. You're always better off measuring performance, profiling the parts of the system that are slow, and using that to guess at causes and guide changes.

Let's say that what you've looked at suggests you should significantly increase shared_buffers, the primary tunable for memory used to cache database reads and writes. This normally has some positive impact, but there are potential negative effects you could encounter instead. The information needed to figure out which category a new application will fall into, whether this change will increase or decrease performance, cannot be predicted from watching the server running with the smaller setting. This falls into the category of chaos theory: even a tiny change in the starting conditions can end up rippling out to a very different end condition, as the server makes millions of decisions and they can each be impacted to a small degree by that change. Similarly, if shared_buffers is set too small, there are several other parameters that won't work as expected at all, such as those governing database checkpoints.
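As a sketch of making that change in a controlled way, you would confirm the current value, adjust one setting, and restart, since shared_buffers only takes effect on a server restart; the value and data directory path here are placeholders:

$ psql -c "SHOW shared_buffers;"
# Edit shared_buffers in postgresql.conf, for example shared_buffers = 2GB, then:
$ pg_ctl restart -D /var/lib/postgresql/10/main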

Since you can't predict what's going to happen most of the time, the mindset you need to adopt is one of heavy monitoring and change control.

Monitor as much as possible, from application to database server to hardware.

Introduce a small targeted change. Try to quantify what's different and be aware that some changes you have rejected as not positive won't always stay that way forever. Move the bottleneck to somewhere else, and you may discover that some parameter that didn't matter before is now suddenly the next limiting factor.

There's a popular expression on the mailing list devoted to PostgreSQL performance when people speculate about root causes without doing profiling to prove their theories: less talk, more gprof. While gprof may not be the tool of choice for every performance issue, given it's more of a code profiling tool than a general monitoring one, the idea that you measure as much as possible before speculating as to the root causes is always a sound one. You should also measure again to verify that your change did what you expected too.

Another principle that is a recurring theme in this book is that you must be systematic about investigating performance issues. Do not assume your server is fast because you bought it from a reputable vendor; benchmark the individual components yourself. Don't start your database performance testing with application-level tests; run synthetic database performance tests that you can compare against other people's results first. That way, when you run into the inevitable application slowdown, you'll already know your hardware is operating as expected and that the database itself is running well. Once your system goes into production, some of the basic things you might need to do in order to find a performance problem, such as testing hardware speed, become impossible because you can't take the system down.
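As a tiny example of the kind of component test meant here, a crude sequential disk throughput check might look like the following sketch; the path and sizes are arbitrary, and later chapters cover proper methodology:

$ dd if=/dev/zero of=/tmp/testfile bs=8k count=262144   # write 2 GB in 8 KB blocks
$ dd if=/tmp/testfile of=/dev/null bs=8k                # read back (caching will skew this)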

You'll be in much better shape if every server you deploy is tested with a common methodology, which is exactly what the later chapters here lead you through. Just because you're not a hardware person, it doesn't mean you should skip over the parts covering things such as testing your disk performance. You need to perform work like that as often as possible when exposed to new systems; that's the only way to develop a basic feel for whether something is operating within the standard range of behavior or whether instead there's something wrong.

Summary

PostgreSQL has come a long way in the last five years. After building solid database fundamentals, the many developers adding features across the globe have made significant strides in adding both new features and performance improvements in recent releases. The features added to the latest PostgreSQL, 10.0, making replication and read scaling easier than ever before, are expected to further expand the types of applications the database is appropriate for.

The extensive performance improvements in PostgreSQL 9.x and 10.x in particular shatter some earlier notions that the database server was slower than its main competitors.

There are still some situations where PostgreSQL's feature set results in slower query processing than some of the commercial databases it might otherwise displace.

If you're starting a new project using PostgreSQL, use the latest version possible (your preference really should be to deploy version 8.3 or later).

PostgreSQL works well in many common database applications, but certainly there are applications it's not the best choice for.

Not everything you need to manage and optimize a PostgreSQL server will be included in a basic install. Be prepared to include an additional number of utilities that add features outside of what the core database aims to provide.

Performance tuning is best approached as a systematic, carefully measured practice.

In the following chapter, we will discuss the hardware best-suited for the
PostgreSQL server.

Table of Contents

  1. PostgreSQL Versions
  2. Database Hardware
  3. Database Hardware Benchmarking
  4. Disk Setup
  5. Memory for Database Caching
  6. Server Configuration Tuning
  7. Routine Maintenance
  8. Database Benchmarking
  9. Database Indexing
  10. Query Optimization
  11. Database Activity and Statistics
  12. Monitoring and Trending
  13. Pooling and Caching
  14. Scaling with Replication
  15. Partitioning Data
  16. Avoiding Common Problems
  17. Other Books You May Enjoy
