What about big data?

Big data is a term used when data exceeds the processing capacity of a typical database. The integration of computer technology into science and daily life has enabled the collection of massive volumes of data, such as climate data, website transaction logs, customer data, and credit card records. However, such big datasets cannot be practically managed on a single commodity computer: they are too large to fit in memory, and processing them sequentially takes too long. To overcome this obstacle, we must resort to parallel and distributed architectures, where multicore and cloud computing platforms provide access to hundreds or thousands of processors, giving us new capabilities for storing and manipulating big data.
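To make the multicore idea concrete, here is a minimal sketch (not from the book) that spreads per-record work across worker processes with Python's standard multiprocessing module; the clean function and chunk size are illustrative placeholders:

    # A minimal sketch of multicore parallelism: the dataset is split into
    # chunks and each chunk is processed by a separate worker process.
    from multiprocessing import Pool

    def clean(record):
        # Placeholder per-record work; a real job would parse, filter, or score.
        return record.strip().lower()

    if __name__ == "__main__":
        records = ["  Alpha ", "BETA", " Gamma  "] * 1000
        with Pool() as pool:                 # one worker per available core
            cleaned = pool.map(clean, records, chunksize=500)
        print(len(cleaned), cleaned[:3])

Distributed frameworks such as Hadoop and Spark apply the same split-the-work principle across many machines rather than across the cores of one.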

Big data is now a reality: the variety, volume, and velocity of data coming from the Web, sensors, devices, audio, video, networks, log files, social media, and transactional applications have reached unprecedented levels. This phenomenal growth, which has hit the business, government, and science sectors alike, means that we must understand not only big data itself, in order to extract the information that truly counts, but also the possibilities of big data analytics.

There are three main features of big data:

  • Volume: Large amounts of data
  • Variety: Different types of structured, unstructured, and multistructured data
  • Velocity: Data that arrives fast and must be analyzed quickly

The interaction between these three Vs is shown in the following figure:

[Figure: the interaction between the three Vs of big data: volume, variety, and velocity]

We need big data analytics when data grows quickly and we need to uncover hidden patterns, unknown correlations, and other useful information on which to base better decisions. With big data analytics, data scientists and others can analyze volumes of data that conventional analytics and business intelligence solutions cannot handle, and thereby transform future business decisions. Big data analytics is a workflow that distills terabytes of low-value data into a much smaller amount of high-value data.

Big data is an opportunity for any company to take advantage of data aggregation, data exhaust, and metadata. This makes big data a useful business analytics tool, although there is a common misunderstanding about what big data actually is.

The most common architecture for big data processing is MapReduce, a programming model that processes large datasets in parallel across a distributed cluster.
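Before Hadoop and MongoDB appear later in the book, the model itself can be sketched in a few lines of plain Python. The following toy word count is an illustration of the pattern, not the book's code: map emits key-value pairs, shuffle groups them by key, and reduce aggregates each group. A cluster framework distributes these same three phases across many machines:

    # A single-process sketch of the MapReduce model.
    from collections import defaultdict

    def map_phase(document):
        for word in document.split():
            yield word.lower(), 1            # emit (word, 1) for every word

    def shuffle(pairs):
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)        # group values by key
        return groups

    def reduce_phase(key, values):
        return key, sum(values)              # total count per word

    documents = ["Big data is big", "data trumps models"]
    pairs = (pair for doc in documents for pair in map_phase(doc))
    counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
    print(counts)                            # {'big': 2, 'data': 2, ...}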

Apache Hadoop is the most popular implementation of MapReduce, and it is used to solve large-scale distributed data storage, analysis, and retrieval tasks. However, MapReduce is just one of three classes of technologies for storing and managing big data; the other two are NoSQL and Massively Parallel Processing (MPP) data stores. In this book, we will implement MapReduce functions and NoSQL storage through MongoDB in Chapter 12, Data Processing and Aggregation with MongoDB, and Chapter 13, Working with MapReduce.

MongoDB provides us with document-oriented storage, high availability, and flexible map/reduce-style aggregation for data processing.
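As a quick taste of what Chapter 12 covers in depth, the following sketch runs an aggregation pipeline with pymongo; it assumes a MongoDB instance on localhost, and the sales collection and its field names are hypothetical examples, not the book's datasets:

    # A minimal pymongo aggregation sketch (assumes a local mongod is running).
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    collection = client["test"]["sales"]

    pipeline = [
        {"$match": {"amount": {"$gt": 0}}},              # filter documents
        {"$group": {"_id": "$product",                   # group by product
                    "total": {"$sum": "$amount"}}},
        {"$sort": {"total": -1}},                        # largest totals first
    ]
    for doc in collection.aggregate(pipeline):
        print(doc["_id"], doc["total"])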

The Unreasonable Effectiveness of Data, a paper published in IEEE Intelligent Systems in 2009, says the following:

"But invariably, simple models and a lot of data trump over more elaborate models based on less data."

This is a fundamental idea in big data (you can find the full paper at http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf). The trouble with real-world data is that the probability of finding false correlations is high, and it gets higher as the dataset grows. That's why, in this book, we will focus on meaningful data rather than on big data.
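A small simulation makes the danger concrete. In the following sketch (illustrative, not from the paper), every column is independent random noise, yet the strongest pairwise correlation found grows steadily as more columns are compared:

    # Spurious correlations among purely random, independent columns.
    import numpy as np

    rng = np.random.default_rng(0)
    n_rows = 200
    for n_cols in (5, 50, 500):
        data = rng.standard_normal((n_rows, n_cols))
        corr = np.corrcoef(data, rowvar=False)       # pairwise correlations
        np.fill_diagonal(corr, 0.0)                  # ignore self-correlation
        print(n_cols, "columns -> max |r| =", round(np.abs(corr).max(), 3))

With more variables to compare, some pair will look correlated by chance alone, even though every true correlation here is zero.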

One of the main challenges of big data is how to store, protect, back up, organize, and catalog data at petabyte scale. Another is the concept of data ubiquity: with the proliferation of smart devices carrying multiple sensors and cameras, the amount of data generated by each person increases every minute, and big data systems must be able to process all of it in real time.
