One of the key advantages of both the page and slab allocators is that the memory chunk they provide upon allocation is not only virtually contiguous (obviously) but is also guaranteed to be physically contiguous. Now that is a big deal and certainly helps performance (think of DMA, and of better cache and TLB behavior).
But (there's always a but, isn't there!), precisely because of this guarantee, it becomes impossible to serve up arbitrarily large allocations. In other words, there is a definite limit to the amount of memory you can obtain from the slab allocator with a single call to our dear k[m|z]alloc() APIs. What is the limit? (This is indeed a frequently asked question.)
Firstly, you should understand that, technically, the limit is determined by two factors:
- One, the system page size (determined by the PAGE_SIZE macro)
- Two, the number of "orders" (determined by the MAX_ORDER macro...