The UDF
One very powerful tool to consider using with Spark is the user-defined function (UDF). UDFs are custom per-row transformations written in native Python that run in parallel across your data. The obvious question is: if UDFs are more flexible, why not use them for everything? The answer comes down to speed, and speed is a significant consideration that should not be ignored. There is a hierarchy of tools to reach for. Ideally, you get the most bang for your buck from the Python DataFrame API and its native functions and methods. DataFrames pass through many optimizations, so they are ideally suited to semi-structured and structured data, and the functions and methods Spark provides are heavily optimized for the most common data processing tasks. Only when you find a case that the native functions and methods simply cannot handle should you fall back to writing a UDF. UDFs are slower because Spark cannot optimize them: it treats your Python code as a black box, serializes it, and ships it to the executors, where each row of data must be passed between the JVM and a Python worker process. That serialization and deserialization overhead is exactly what the native functions avoid.
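To make the hierarchy concrete, here is a minimal PySpark sketch contrasting the two approaches. The DataFrame, column name, and the shout function are illustrative assumptions, not part of the original text; the point is that the native upper function stays inside the optimized engine, while the UDF forces rows out to a Python worker.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("udf-example").getOrCreate()

# A small illustrative DataFrame.
df = spark.createDataFrame([("alice",), ("bob",)], ["name"])

# Preferred: a native column function, fully visible to Spark's optimizer.
df.select(F.upper(F.col("name")).alias("name_upper")).show()

# Fallback: a Python UDF for logic the native API can't express.
# Spark treats this as a black box, so each row is serialized to a
# Python worker process and the result serialized back to the JVM.
@F.udf(returnType=StringType())
def shout(name):
    return name.upper() + "!"

df.select(shout(F.col("name")).alias("name_shout")).show()
```

In a real job you would reserve the UDF form for transformations with no native equivalent; anything expressible with built-in functions should stay in the DataFrame API.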