Large deployments of PostgreSQL systems go through several common phases as the number of database clients increases. You're likely to run into disk bottlenecks initially. These can sometimes be bypassed by reorganizing the system so more of the active data is in RAM. Once that's accomplished, and the system is sized properly so the database is mainly returning information that's in fast memory, a new bottleneck tends to appear quickly. You may then find yourself limited by the relatively high overhead of creating a database connection just to ask it for data.
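To get a feel for how large that per-connection overhead can be, here's a minimal timing sketch using Python's psycopg2 driver. The connection string is a placeholder for your environment, and the exact numbers will vary widely by system; the point is only the relative gap between the two loops:

```python
import time
import psycopg2

# Placeholder connection details; adjust for your environment.
DSN = "dbname=test user=postgres host=localhost"

# Run 100 trivial queries, opening a fresh connection each time.
start = time.time()
for _ in range(100):
    conn = psycopg2.connect(DSN)
    cur = conn.cursor()
    cur.execute("SELECT 1")
    cur.fetchone()
    conn.close()
print("new connection per query: %.3fs" % (time.time() - start))

# Run the same 100 queries over a single reused connection.
conn = psycopg2.connect(DSN)
start = time.time()
for _ in range(100):
    cur = conn.cursor()
    cur.execute("SELECT 1")
    cur.fetchone()
print("one reused connection:    %.3fs" % (time.time() - start))
conn.close()
```

On most systems the first loop is dramatically slower, since each iteration pays for a full backend process startup and authentication handshake before any useful work happens.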
When you reach that point, there are two major approaches to consider. You can reuse database connections with pooling, or try to cache database activity outside of the database. Better still, the two approaches stack on top of one another. You can, for example...
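As a rough illustration of how the two layers combine, the following sketch (again using psycopg2, with placeholder connection details and a deliberately naive in-process cache keyed by query text) serves repeated queries from the cache first, and falls back to a pooled connection only on a miss:

```python
import psycopg2.pool

# Placeholder pool bounds and connection details for illustration.
pool = psycopg2.pool.SimpleConnectionPool(
    1, 10, dsn="dbname=app user=app host=localhost")

cache = {}  # naive in-process result cache keyed by SQL text

def cached_query(sql):
    # First layer: serve a repeated query from the cache,
    # skipping the database entirely.
    if sql in cache:
        return cache[sql]
    # Second layer: when we must hit the database, borrow an
    # already-open pooled connection instead of creating one.
    conn = pool.getconn()
    try:
        cur = conn.cursor()
        cur.execute(sql)
        result = cur.fetchall()
    finally:
        pool.putconn(conn)  # return it for the next caller
    cache[sql] = result
    return result

print(cached_query("SELECT count(*) FROM pg_class"))
print(cached_query("SELECT count(*) FROM pg_class"))  # cache hit
pool.closeall()
```

This is a sketch of the idea, not a production design: a real cache needs an invalidation policy, and pooling is often handled by dedicated middleware rather than inside the application itself.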