dd
dd is a standard UNIX utility capable of reading and writing blocks of data very efficiently. To use it properly to test sequential read and write throughput on a disk, you'll need to have it work with a file that's at least twice the size of your total server RAM. That will be large enough that your system cannot possibly cache all of the read and write operations in memory, which would significantly inflate results. The preferred block size to use with dd is 8 KB, to match how the database does its sequential read and write operations. At that size, a rough formula you can use to compute how many such blocks are needed to reach twice your RAM size is as follows:
blocks = 250,000 * (gigabytes of RAM)
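For example, a server with 16 GB of RAM would need blocks = 250,000 * 16 = 4,000,000; at 8 KB each, that works out to a file of roughly 32 GB, about twice the RAM size.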
Once you know that number, the following simple commands will time writing out a file too large to fit in the OS RAM cache, and then time reading it back:
time sh -c "dd if=/dev/zero of=bigfile bs=8k count=blocks && sync"
time dd if=bigfile bs=8k of=/dev/null
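As a sketch of how this looks in practice, assuming the hypothetical 16 GB server from the earlier example and substituting its computed block count (the file name bigfile is arbitrary; create it on the disk you want to test):

time sh -c "dd if=/dev/zero of=bigfile bs=8k count=4000000 && sync"
time dd if=bigfile bs=8k of=/dev/null

The sync at the end of the write step ensures the timing includes flushing the written data out of the OS write cache to disk. To turn the timings into throughput, divide the file size by the elapsed real time: a 32 GB file written in 200 seconds works out to roughly 160 MB/s. Recent versions of GNU dd also print a MB/s figure directly when they finish.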