Preface
In recent years, the volume of data being collected, stored, and analyzed has exploded, particularly data generated by activity on the web and mobile devices, as well as data from the physical world collected via sensor networks. While large-scale data storage, processing, analysis, and modeling were previously the domain of the largest institutions, such as Google, Yahoo!, Facebook, and Twitter, increasingly many organizations face the challenge of handling massive amounts of data.
With the advent of big data, extracting knowledge from large, heterogeneous, and noisy datasets requires not only powerful computing resources, but also the programming abstractions to use them effectively. The abstractions that have emerged over the last decade blend ideas from parallel databases, distributed systems, and programming languages to create a new class of scalable data analytics platforms that form the foundation for data science at realistic scales.
The objective of this book is to give readers a flavor of the challenges in data science and show how to address them with a variety of analytical tools on a distributed system such as Spark, which is well suited to iterative algorithms, offers in-memory processing, and provides the flexibility needed for data analysis at scale. This book introduces readers to the fundamentals of Spark and helps them learn the concepts through code examples. It also briefly covers data mining, text mining, natural language processing (NLP), machine learning, and related topics. Readers learn how to solve real-world analytical problems on large datasets through a practical, code-driven approach that leverages the features of Spark.