Picking a processor

When selecting a CPU for our server, we have a lot to consider. At the time of writing, the trend among processors in every space, including mobile, is toward multiple cores per chip. CPU manufacturers have found that providing a large number of smaller processing units spreads workload horizontally for better overall scalability.

As PostgreSQL users, we benefit from this tremendously. PostgreSQL is based on processes instead of threads, which means each connected client is assigned to a process that can use a CPU core when available. The host operating system can perform such allocations without any input from the database software. Because motherboards have limited space for processor sockets, packing more cores into the same real estate means more simultaneously active database clients.

Once again, our discussion veers toward capacity planning for a three- or four-year cycle. Limited processing capability leads to slow or delayed queries, or a database that is incapable of adequately handling a growing number of simultaneous users. Yet simply choosing the fastest CPU with the most cores and filling the motherboard can be a staggering waste of resources. So how, then, do we know what to buy?

That's what we're here to figure out.

Getting ready

Luckily, there are really only two manufacturers that produce commodity server-class CPUs, and each vendor has a line of CPUs designed specifically for server use. AMD and Intel provide similar price-to-performance curves, but that's where the comparison ends.

At the time of writing, Intel Xeon CPUs benchmark significantly higher than equivalently priced AMD Opterons. This is true for both mid-range and high-end processors. Before going through this recipe, it would be a good idea to visit AnandTech, Tom's Hardware, Intel, and AMD, just to get a basic idea of the landscape. There are a lot of benchmarks that compare various models of CPUs, so don't take our word for it.

Because of this current performance disparity, we'll focus exclusively on Intel processors for now. This situation has changed in the past, and may do so again in the future.

How to do it...

If we already have a PostgreSQL database available, we can collect some of the information we need by executing a query to start our calculations. This works best when run at the most active time of day.

If you have PostgreSQL 9.2 or higher, execute this query as a superuser to get the count of simultaneously active users:

SELECT count(1) FROM pg_stat_activity 
 WHERE state = 'active'; 

Use this query if you have an older version:

SELECT count(1) FROM pg_stat_activity 
 WHERE current_query NOT LIKE '<IDLE>%'; 

If we don't have a PostgreSQL server, we need to make an educated guess. Use these steps to approximate:

  1. Work with the application developers to obtain a count of expected clients active per second.
  2. Divide the previous number by 50 to remain consistent with our 20 ms query assumption; at 20 ms per query, each active connection can service roughly 50 queries per second (see the example after this list).
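
As a quick sanity check, this arithmetic can be run as a plain expression in psql. The figure of 1000 clients per second used below is purely hypothetical:

-- Hypothetical example: 1000 expected clients per second. At 20 ms per
-- query, each active connection services about 50 queries per second.
SELECT 1000 / 50 AS estimated_active_clients;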

Once we have some idea of how many queries will be active simultaneously, we need to figure out the processor count. Follow these steps:

  1. If we already know how many disks will store our data, use this number. In the case of SSD-based storage, use 0.
  2. Subtract the previous number from our count of active users.
  3. Divide the previous result by two.
  4. Apply the following formula, where x is the value from the previous step: x * (1.4)^3. A query sketch combining these steps follows this list.
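
Putting these steps together against an existing server, the whole calculation can be expressed as a single query, using the PostgreSQL 9.2 and higher form shown earlier. This is only a sketch, not part of the recipe itself: it assumes two dedicated data spindles and the three-year growth factor from step 4, so substitute your own values.

-- Sketch only: 2 assumed spindles, 40 percent yearly growth for three years.
-- (count of active clients - spindles) / 2, then multiplied by 1.4^3.
SELECT ceil(((count(1) - 2) / 2.0) * power(1.4, 3)) AS suggested_core_count
  FROM pg_stat_activity
 WHERE state = 'active';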

How it works...

Before we can even begin to decide on a processor count, we need a baseline. With a working PostgreSQL server to base our numbers on, we can simply use the number of existing users during a busy period. Without that, we need to guess. This guess can actually be pretty close, depending on how the application was targeted. If the intent is to service 1000 users per second, we should start there, since that's the same assumption the company is using to buy application and web servers.

After that, we apply a formula that PostgreSQL administrators have accepted and used for a very long time: the ideal number of active connections is equal to twice the number of available processor cores, plus the number of disk spindles. Amusingly, the disk spindles increase the ideal number of connections because they contribute seek time, which forces the processor to wait for information. While a processor is waiting for input for one connection, the operating system may decide to lend the processor to another until the data is retrieved.
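
As a hypothetical illustration of the formula in its forward direction, a server with eight physical cores and four disk spindles would be comfortable with about twenty active connections:

-- Ideal active connections = 2 * cores + spindles
-- (hypothetical server: 8 physical cores, 4 spindles)
SELECT 2 * 8 + 4 AS ideal_active_connections;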

So, we apply that accepted formula in reverse: first we subtract the number of spindles, and then we divide by two to obtain the number of CPU cores we should have for our expected workload.

Afterwards, we assume a 40 percent increase in active clients on a yearly basis, and increase the CPU core count accordingly for three years. Note that this is a very aggressive growth rate. If we have historical growth data available, or the company is expecting a different value, we should use that instead.
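
If it helps to visualize that growth assumption, the year-by-year projection can also be sketched in psql. The baseline of 10 cores here is just an example value:

-- Hypothetical baseline of 10 cores, growing 40 percent per year for three years.
SELECT year, ceil(10 * power(1.4, year)) AS projected_cores
  FROM generate_series(1, 3) AS year;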

When purchasing CPUs, no matter how the cores are distributed, the final total should be equal to or greater than the number we calculated. If it isn't, the application may require more aggressive caching than expected, or we may need to scale the database horizontally. We're not ready to introduce that yet, but keep it in mind for later.

There's more...

The processor count is only part of the story. Intel CPUs have a few additional features we need to consider.

Hyperthreading

Newer generations of Intel processors often provide a feature called hyperthreading, which splits each physical processor core into two virtual cores. Historically, this was not well received, as benchmarks often illustrated performance degradation when the feature was enabled.

Since the introduction of the Nehalem-based architecture in 2008, this is no longer the case. While doubling the visible processor count does not result in doubled throughput, we've run several tests that show up to a 40 percent improvement over using physical cores alone. This may not be universal, but it does apply to PostgreSQL performance tests. This means that the commonly accepted formula for determining the ideal connection count requires modification.

Current advice is to base the calculation on physical cores only, rather than the doubled virtual core count. Assuming a 40 percent increase from enabling hyperthreading, the new formula becomes: 2 * 1.4 * CPUs + spindles. With that in mind, if we wanted to serve 1000 connections per second and used SSDs to host our data, the calculation would be 1000 / 50 / 1.4, or roughly 14. Half of that is about seven, but no CPU has seven physical cores, so we would need at least eight. If we used the physical cores alone for our calculation, we would need 10.
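
The same worked example can be verified as simple arithmetic in psql, assuming SSD storage and therefore zero spindles:

-- 1000 clients per second / 50 queries per connection = 20 active connections.
-- With hyperthreading's assumed 40 percent boost: 20 / 1.4 / 2 = 7.14, so 8 cores.
-- Using physical cores alone: 20 / 2 = 10 cores.
SELECT ceil(1000 / 50 / 1.4 / 2) AS cores_with_hyperthreading,
       ceil(1000 / 50 / 2.0) AS cores_without_hyperthreading;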

Turbo Boost

Recent Intel processors also have a feature called Turbo Boost. Some vendor motherboards disable this by default, so make sure to go through the BIOS settings before performing acceptance tests, as turbo mode can provide up to 25 percent better performance in isolated cases.

This is possible because the maximum speed of the core itself is increased when resources are available. A 2.6 GHz core might operate temporarily at 3.0 GHz. For queries that are dependent on nested loops or other CPU-intensive operations, this can drastically reduce query execution times.

Power usage

Intel chip families often include low-voltage versions of their high-performance offerings. While these processors require up to 30 percent less electricity, they also run up to 25 percent slower. Low-power naming designations are not always consistent, so when choosing a processor, make sure to compare the specifications of all similarly named chips.

Beware of accidentally choosing a low-power chip for a database meant for high performance. However, these chips may be ideal for warehouse or reporting databases, since those systems are not built for high throughput or vast numbers of simultaneous users. They often cost less than their high-performance counterparts, making them perfect for systems expecting low utilization.

See also

  - The CPU benchmarks published by AnandTech and Tom's Hardware, as mentioned in the Getting ready section of this recipe
