Profiling the memory usage of your code with memory_profiler
The methods described in the previous recipe were about CPU time profiling. That may be the most obvious factor when it comes to code profiling. However, memory is also a critical resource. Writing memory-optimized code is not trivial, but it can make your programs significantly faster. This is particularly important when dealing with large NumPy arrays, as we will see later in this chapter.
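To illustrate why memory layout matters, here is a minimal sketch (not part of the recipe itself) comparing the approximate footprint of a Python list of integers with an equivalent NumPy array; the list stores a pointer plus a full int object per element, while the array stores raw 8-byte integers contiguously:

```python
import sys
import numpy as np

n = 1_000_000
lst = list(range(n))   # list of n separate Python int objects
arr = np.arange(n)     # one contiguous buffer of n 8-byte integers

# Rough estimate: the list object (pointer array) plus each int object.
list_bytes = sys.getsizeof(lst) + sum(sys.getsizeof(x) for x in lst)
array_bytes = arr.nbytes  # exact size of the array's data buffer

print(list_bytes > array_bytes)  # the list uses several times more memory
```

The estimate slightly overcounts the list (small integers are cached and shared), but the order-of-magnitude difference is real.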
In this recipe, we will look at a simple memory profiler, unsurprisingly named memory_profiler. Its usage is very similar to line_profiler's, and it can be conveniently used from IPython.
Getting ready
You can install memory_profiler with conda install memory_profiler.
How to do it...
We load the memory_profiler IPython extension:

>>> %load_ext memory_profiler
We define a function that allocates big objects:
>>> %%writefile memscript.py
def my_func():
    a = [1] * 1000000
    b = [2] * 9000000
    del b
    return a
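Before profiling this function line by line, it can help to see what we expect: the peak memory usage should reflect both lists, while the final usage should reflect only the surviving list a. As a quick sanity check outside of memory_profiler, a sketch using the standard library's tracemalloc module (an assumption on our part, not part of this recipe's tooling) confirms this:

```python
import tracemalloc

def my_func():
    a = [1] * 1000000
    b = [2] * 9000000
    del b
    return a

tracemalloc.start()
result = my_func()
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

# `peak` includes the 9-million-element list b before it is deleted;
# `current` reflects only the surviving 1-million-element list a.
print(peak > current > 0)  # prints True
```

tracemalloc tracks allocations made by the Python interpreter itself, whereas memory_profiler samples the memory usage of the whole process; the two numbers therefore differ, but both show the same peak-then-drop pattern for this function.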
Now, let's...