Summary
In this chapter, we have concluded our journey by discussing aspects of distributed computing performance and what to exploit when writing your own scalable analytics. Hopefully, you've come away with a sense of some of the challenges involved and have a better understanding of how Spark works under the covers.
Apache Spark is a constantly evolving framework, and new features and improvements are being added every day. No doubt it will become increasingly easy to use as continuous tweaks and refinements are intelligently applied to the framework, automating much of what must be done manually today.
In terms of what's next, who knows what's around the corner? But with Spark beating the competition yet again to win the 2016 CloudSort Benchmark (http://sortbenchmark.org/) and new versions set to be released every four months, one thing is for sure: it's going to be fast-paced. And hopefully, with the solid principles and methodical guidelines that you've...